[gpfsug-discuss] Biggest file that will fit inside an inode?

Luke Raimbach luke.raimbach at googlemail.com
Mon Oct 3 17:16:05 BST 2016


It doesn't, but the end result is the same... data shipped off 'somewhere
else' with a stub file?

I have in my mind that DMAPI support was around before data-in-inode (or at
least 4K inodes) was introduced, so it had to be made a bit cleverer to
cope, but I may be misremembering that.

On Mon, 3 Oct 2016 at 17:10 Simon Thompson (Research Computing - IT
Services) <S.J.Thompson at bham.ac.uk> wrote:

>
> TCT doesn't use DMAPI though, I thought?
> ________________________________________
> From: gpfsug-discuss-bounces at spectrumscale.org [
> gpfsug-discuss-bounces at spectrumscale.org] on behalf of Luke Raimbach [
> luke.raimbach at googlemail.com]
> Sent: 03 October 2016 17:07
> To: gpfsug main discussion list
> Subject: Re: [gpfsug-discuss] Biggest file that will fit inside an inode?
>
> Surely it wouldn't go? Maybe the data would get copied out rather than
> stubbed... DMAPI can't be stupid enough to stub data out of an inode? Can
> it? Interesting question.
>
> Maybe I'll test that one.
>
> On Mon, 3 Oct 2016 at 17:00 Simon Thompson (Research Computing - IT
> Services) <S.J.Thompson at bham.ac.uk> wrote:
>
> Would you tier an in-inode file to the cloud?
>
> I mean, I wouldn't tier an in-inode file out to tape?
>
> Simon
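
A rough way to gauge how many files would fall into that category before
kicking off a migration is to scan for them first. This is a minimal sketch,
assuming that files at or below 3968 bytes with zero allocated blocks (per the
figures quoted further down) are still in-inode:

    #!/usr/bin/env python
    # Sketch: count files under a directory tree that could still be data-in-inode
    # (size <= 3968 bytes and no allocated data blocks), i.e. files a tape or cloud
    # migration would only turn into stubs with no space saving. The 3968-byte
    # limit and the st_blocks == 0 heuristic are assumptions, not guarantees.
    import os
    import stat
    import sys

    IN_INODE_MAX = 3968  # 4K inode minus ~128 bytes of inode metadata

    def count_in_inode_candidates(root):
        candidates = total = 0
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                try:
                    st = os.lstat(os.path.join(dirpath, name))
                except OSError:
                    continue
                if not stat.S_ISREG(st.st_mode):
                    continue  # skip symlinks, devices, etc.
                total += 1
                if st.st_size <= IN_INODE_MAX and st.st_blocks == 0:
                    candidates += 1
        return candidates, total

    if __name__ == "__main__":
        found, seen = count_in_inode_candidates(sys.argv[1] if len(sys.argv) > 1 else ".")
        print("%d of %d regular files look like in-inode candidates" % (found, seen))
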
> ________________________________________
> From: gpfsug-discuss-bounces at spectrumscale.org [
> gpfsug-discuss-bounces at spectrumscale.org] on behalf of Oesterlin, Robert
> [Robert.Oesterlin at nuance.com]
> Sent: 03 October 2016 16:56
> To: gpfsug main discussion list
> Subject: Re: [gpfsug-discuss] Biggest file that will fit inside an inode?
>
> What's going to be taken away if you use Encryption or Transparent Cloud
> Tiering?
>
>
> Bob Oesterlin
> Sr Storage Engineer, Nuance HPC Grid
>
>
> From: <gpfsug-discuss-bounces at spectrumscale.org> on behalf of Marc A Kaplan
> <makaplan at us.ibm.com>
> Reply-To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
> Date: Monday, October 3, 2016 at 10:46 AM
> To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
> Subject: [EXTERNAL] Re: [gpfsug-discuss] Biggest file that will fit inside
> an inode? 3968!!
>
> On a non-SELINUX system the answer is 3968 bytes of data in a 4K inode; the
> remaining 128 bytes are inode metadata.
>
> Caution: it's possible this could change in some future release ... I don't
> know of any plans, I'm just saying ...
>
> Inode 16346892 [16346892] snap 0 (index 12 in block 255420):
>   Inode address: 6:123049056 size 4096 nAddrs 330
>   indirectionLevel=INODE status=USERFILE
>   objectVersion=1 generation=0xC0156CB nlink=1
>   owner uid=0 gid=0 mode=0200100644: -rw-r--r--
>   blocksize code=5 (32 subblocks)
>   lastBlockSubblocks=0
>   checksum=0xAD8E0B4B is Valid
>   fileSize=3968 nFullBlocks=0
>   currentMetadataReplicas=1 maxMetadataReplicas=2
>   currentDataReplicas=1 maxDataReplicas=2
>   ...
>   Data [3968]:
> 0000000000000000: BCA91252 2B64BEDC A7D7BA9D D5BE8C30  *...R+d.........0*
> ...
> 0000000000000F70: DA925E2F 16A68C01 03CA5E37 08D72B7F  *..^/......^7..+.*
>   trailer: is NULL
>
>
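
For anyone who wants to sanity-check the 3968-byte figure from userspace
rather than with an inode dump, here is a minimal sketch. It assumes a GPFS
filesystem with 4K inodes and that a data-in-inode file reports st_blocks == 0
once it has been synced; neither assumption is guaranteed to hold on every
release.

    #!/usr/bin/env python
    # Sketch: write one file at the supposed in-inode limit (3968 bytes) and one
    # a single byte over it, then compare st_blocks. Run it in a directory on a
    # GPFS filesystem with 4K inodes. The expectation that the smaller file shows
    # zero allocated blocks is an assumption based on the figures above, not a
    # guarantee.
    import os
    import sys

    IN_INODE_MAX = 4096 - 128  # 3968 bytes: 4K inode minus ~128 bytes of metadata

    def write_and_stat(path, size):
        with open(path, "wb") as f:
            f.write(b"x" * size)
            f.flush()
            os.fsync(f.fileno())  # force allocation so st_blocks is meaningful
        return os.stat(path)

    def main(target_dir):
        for name, size in (("in_inode_test", IN_INODE_MAX),
                           ("out_of_inode_test", IN_INODE_MAX + 1)):
            st = write_and_stat(os.path.join(target_dir, name), size)
            print("%-18s size=%-5d st_blocks=%d" % (name, st.st_size, st.st_blocks))

    if __name__ == "__main__":
        main(sys.argv[1] if len(sys.argv) > 1 else ".")

If the smaller file does show allocated blocks, something has pushed the data
out of the inode, which loops back to Bob's question about what Encryption or
Transparent Cloud Tiering takes away.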