<font size=2 face="sans-serif">(I can answer your basic questions; Sven
has more experience with tuning very large file systems, so perhaps he
will have more to say...)</font><br><br><font size=2 face="sans-serif">1. Inodes are packed into the file of
inodes. (There is one file containing all the inodes in a filesystem.) </font><br><br><font size=2 face="sans-serif">If you have a 1MB metadata-blocksize you
will have 256 4KB inodes per block. Forget about sub-blocks when
it comes to the file of inodes.</font><br><font size=2 face="sans-serif"><br>2. IF a file's data fits in its inode, then migrating that file from one
pool to another just changes the preferred pool name in the inode. No
data movement. Should the file later "grow" to require
a data block, that data block will be allocated from whatever pool is named
in the inode at that time.</font><br><br><font size=2 face="sans-serif">See the email I posted earlier today.
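To make the arithmetic concrete, here is a quick sketch (the ~512 bytes of per-inode overhead is an assumption on my part; extended attributes shrink the usable in-inode space further):

```shell
# One file of inodes, packed end to end: with a 1 MB metadata block size
# and 4 KB inodes, each block of the inode file holds exactly 256 inodes.
echo $(( (1024 * 1024) / 4096 ))   # prints 256

# Rough usable in-inode data space for a 4 KB inode, assuming ~512 bytes
# of inode overhead (an assumption; xattrs reduce this further). This
# matches the 3584-byte cutoff used later in this thread.
echo $(( 4096 - 512 ))             # prints 3584
```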
Basically: FORGET what you thought you knew about optimal metadata
blocksize (perhaps based on how you thought metadata was laid out on disk)
and just stick to optimal IO transfer blocksizes. </font><br><br><font size=2 face="sans-serif">Yes, there may be contrived scenarios
or even a few real-life special cases, but those would be few and far between.
</font><br><font size=2 face="sans-serif">Try following the newer general, easier,
rule and see how well it works.</font><br><br><br><br><br><font size=1 color=#5f5f5f face="sans-serif">From:
</font><font size=1 face="sans-serif">"Buterbaugh, Kevin
L" <Kevin.Buterbaugh@Vanderbilt.Edu></font><br><font size=1 color=#5f5f5f face="sans-serif">To:
</font><font size=1 face="sans-serif">gpfsug main discussion
list <gpfsug-discuss@spectrumscale.org></font><br><font size=1 color=#5f5f5f face="sans-serif">Date:
</font><font size=1 face="sans-serif">09/24/2016 10:19 AM</font><br><font size=1 color=#5f5f5f face="sans-serif">Subject:
</font><font size=1 face="sans-serif">Re: [gpfsug-discuss]
Blocksize</font><br><font size=1 color=#5f5f5f face="sans-serif">Sent by:
</font><font size=1 face="sans-serif">gpfsug-discuss-bounces@spectrumscale.org</font><br><hr noshade><br><br><br><font size=3>Hi Sven, </font><br><br><font size=3>I am confused by your statement that the metadata block
size should be 1 MB and am very interested in learning the rationale behind
this, as I am currently looking at all aspects of our current GPFS configuration
and the possibility of making major changes.</font><br><br><font size=3>If you have a filesystem with only metadataOnly disks
in the system pool and the default size of an inode is 4K (which we would
do, since we have recently discovered that even on our scratch filesystem
we have a bazillion files that are 4K or smaller and could therefore have
their data stored in the inode, right?), then why would you set the metadata
block size to anything larger than 128K when a sub-block is 1/32nd of a
block? I.e., with a 1 MB block size for metadata wouldn’t you be
wasting a massive amount of space?</font><br><br><font size=3>What am I missing / confused about there?</font><br><br><font size=3>Oh, and here’s a related question … let’s just say
I have the above configuration … my system pool is metadata only and is
on SSD’s. Then I have two other dataOnly pools that are spinning
disk. One is for “regular” access and the other is the “capacity”
pool … i.e. a pool of slower storage where we move files with large access
times. I have a policy that says something like “move all files
with an access time > 6 months to the capacity pool.” Of those
bazillion files less than 4K in size that currently fit in the inode,
probably half a bazillion (<grin>) of them would be subject to that
rule. Will they get moved to the spinning disk capacity pool or will
they stay in the inode??</font><br><br><font size=3>Thanks! This is a very timely and interesting discussion
for me as well...</font><br><br><font size=3>Kevin</font><br><br><font size=3>On Sep 23, 2016, at 4:35 PM, Sven Oehme <</font><a href=mailto:oehmes@us.ibm.com><font size=3 color=blue><u>oehmes@us.ibm.com</u></font></a><font size=3>>
wrote:</font><br><p><font size=3>Your metadata block size these days should be 1 MB and
there are only very few workloads for which you should run with a filesystem
blocksize below 1 MB. So if you don't know exactly what to pick, 1 MB is
a good starting point. <br>The general rule still applies that your filesystem blocksize (metadata
or data pool) should match your raid controller (or GNR vdisk) stripe size
of the particular pool.<br><br>So if you use a 128k strip size (default in many midrange storage controllers)
in an 8+2p raid array, your stripe or track size is 1 MB and therefore the
blocksize of this pool should be 1 MB. I see many customers in the field
using 1MB or even smaller blocksize on RAID stripes of 2 MB or above and
your performance will be significantly impacted by that. <br><br>Sven<br><br>------------------------------------------<br>Sven Oehme <br>Scalable Storage Research <br>email: </font><a href=mailto:oehmes@us.ibm.com><font size=3 color=blue><u>oehmes@us.ibm.com</u></font></a><font size=3><br>Phone: +1 (408) 824-8904 <br>IBM Almaden Research Lab <br>------------------------------------------<br><br></font><font size=3><br></font><font size=2 color=#5f5f5f><br>From: </font><font size=2>Stephen Ulmer <</font><a href=mailto:ulmer@ulmer.org><font size=2 color=blue><u>ulmer@ulmer.org</u></font></a><font size=2>></font><font size=2 color=#5f5f5f><br>To: </font><font size=2>gpfsug main discussion list <</font><a href="mailto:gpfsug-discuss@spectrumscale.org"><font size=2 color=blue><u>gpfsug-discuss@spectrumscale.org</u></font></a><font size=2>></font><font size=2 color=#5f5f5f><br>Date: </font><font size=2>09/23/2016 12:16 PM</font><font size=2 color=#5f5f5f><br>Subject: </font><font size=2>Re: [gpfsug-discuss] Blocksize</font><font size=2 color=#5f5f5f><br>Sent by: </font><a href="mailto:gpfsug-discuss-bounces@spectrumscale.org"><font size=2 color=blue><u>gpfsug-discuss-bounces@spectrumscale.org</u></font></a><p><hr noshade><font size=3><br><br></font><font size=4><br>Not to be too pedantic, but I believe the subblock size is 1/32 of
the block size (which strengthens Luis’s arguments below).</font><font size=3><br></font><font size=4><br>I thought the original question was NOT about inode size, but about
metadata block size. You can specify that the system pool have a different
block size from the rest of the filesystem, providing that it ONLY holds
metadata (--metadata-block-size option to mmcrfs).</font><font size=3><br></font><font size=4><br>So with 4K inodes (which should be used for all new filesystems without
some counter-indication), I would think that we’d want to use a metadata
block size of 4K*32=128K. This is independent of the regular block size,
which you can calculate based on the workload if you’re lucky.</font><font size=3><br></font><font size=4><br>There could be a great reason NOT to use 128K metadata block size, but
I don’t know what it is. I’d be happy to be corrected about this if it’s
out of whack.</font><font size=3><br></font><font size=4><br>-- <br>Stephen<br></font><font size=3><br></font><br><font size=4>On Sep 22, 2016, at 3:37 PM, Luis Bolinches <</font><a href=mailto:luis.bolinches@fi.ibm.com><font size=4 color=blue><u>luis.bolinches@fi.ibm.com</u></font></a><font size=4>>
wrote:</font><font size=3><br></font><font size=3 face="Arial"><br>Hi</font><font size=3><br></font><font size=3 face="Arial"><br>My 2 cents.</font><font size=3><br></font><font size=3 face="Arial"><br>Leave inodes at 4K at least; then you get a massive improvement on small files (less than 3.5K, minus whatever you use on xattrs)</font><font size=3><br></font><font size=3 face="Arial"><br>About blocksize for data, unless you have actual data that suggests that
you will actually benefit from a block smaller than 1MB, leave it there. GPFS
uses sub-blocks, where 1/16th of the BS can be allocated to different files,
so the "waste" is much less than you think with 1MB, and you get
the throughput and fewer structures than with many more smaller data blocks.</font><font size=3><br></font><font size=3 face="Arial"><br>No<b> warranty at all</b>, but I try to do this when the BS talk comes up:
(might need some cleanup; it may not be my latest note, but you get the idea)</font><font size=3><br></font><font size=3 face="Arial"><br>POSIX<br>find . -type f -name '*' -exec ls -l {} \; > find_ls_files.out<br>GPFS<br>cd /usr/lpp/mmfs/samples/ilm<br>gcc mmfindUtil_processOutputFile.c -o mmfindUtil_processOutputFile<br>./mmfind /gpfs/shared -ls -type f > find_ls_files.out<br>CONVERT to CSV<br><br>POSIX<br>cat find_ls_files.out | awk '{print $5","}' > find_ls_files.out.csv<br>GPFS<br>cat find_ls_files.out | awk '{print $7","}' > find_ls_files.out.csv<br>LOAD in octave<br><br>FILESIZE = int32 (dlmread ("find_ls_files.out.csv", ","));<br>Clean the second column (OPTIONAL as the next clean up will do the same)<br><br>FILESIZE(:,[2]) = [];<br>If we are on 4K alignment we need to clean the files that go to inodes (WELL
not exactly ... extended attributes! so maybe use a lower number!)<br><br>FILESIZE(FILESIZE<=3584) =[];<br>If we are not we need to clean the 0 size files<br><br>FILESIZE(FILESIZE==0) =[];<br>Median<br><br>FILESIZEMEDIAN = int32 (median (FILESIZE))<br>Mean<br><br>FILESIZEMEAN = int32 (mean (FILESIZE))<br>Variance<br><br>int32 (var (FILESIZE))<br>iqr interquartile range, i.e., the difference between the upper and lower
quartile, of the input data.<br><br>int32 (iqr (FILESIZE))<br>Standard deviation</font><font size=3><br><br></font><font size=3 face="Arial"><br>For some FS with lots of files you might need a rather powerful machine
to run the calculations in Octave; I never hit anything I could not manage
on a 64GB RAM Power box. Most of the times it is enough with my laptop.</font><font size=3><br><br></font><font size=3 face="Arial"><br><br>--<br>Ystävällisin terveisin / Kind regards / Saludos cordiales / Salutations<br><br>Luis Bolinches<br>Lab Services</font><font size=3 color=blue><u><br></u></font><a href="http://www-03.ibm.com/systems/services/labservices/"><font size=3 color=blue face="Arial"><u>http://www-03.ibm.com/systems/services/labservices/</u></font></a><font size=3 face="Arial"><br><br>IBM Laajalahdentie 23 (main Entrance) Helsinki, 00330 Finland<br>Phone: +358 503112585<br><br>"If you continually give you will continually have." Anonymous</font><font size=3><br><br></font><font size=3 face="Arial"><br>----- Original message -----<br>From: Stef Coene <</font><a href=mailto:stef.coene@docum.org><font size=3 color=blue face="Arial"><u>stef.coene@docum.org</u></font></a><font size=3 face="Arial">><br>Sent by: </font><a href="mailto:gpfsug-discuss-bounces@spectrumscale.org"><font size=3 color=blue face="Arial"><u>gpfsug-discuss-bounces@spectrumscale.org</u></font></a><font size=3 face="Arial"><br>To: gpfsug main discussion list <</font><a href="mailto:gpfsug-discuss@spectrumscale.org"><font size=3 color=blue face="Arial"><u>gpfsug-discuss@spectrumscale.org</u></font></a><font size=3 face="Arial">><br>Cc:<br>Subject: Re: [gpfsug-discuss] Blocksize<br>Date: Thu, Sep 22, 2016 10:30 PM</font><font size=3><br></font><tt><font size=3><br>On 09/22/2016 09:07 PM, J. Eric Wonderley wrote:<br>> It defaults to 4k:<br>> mmlsfs testbs8M -i<br>> flag value
description<br>> ------------------- ------------------------ -----------------------------------<br>> -i 4096 Inode size in bytes<br>><br>> I think you can make it as small as 512b. GPFS will store very small<br>> files in the inode.<br>><br>> Typically you want your average file size to be your blocksize and
your<br>> filesystem has one blocksize and one inodesize.<br><br>The files are not small, but around 20 MB on average.<br>So I calculated with IBM that a 1 MB or 2 MB block size is best.<br><br>But I'm not sure if it's better to use a smaller block size for the<br>metadata.<br><br>The file system is not that large (400 TB) and will hold backup data<br>from CommVault.<br><br><br>Stef<br>_______________________________________________<br>gpfsug-discuss mailing list<br>gpfsug-discuss at </font></tt><a href=http://spectrumscale.org/><tt><font size=3 color=blue><u>spectrumscale.org</u></font></tt></a><font size=3 color=blue><u><br></u></font><a href="http://gpfsug.org/mailman/listinfo/gpfsug-discuss" target=_blank><tt><font size=3 color=blue><u>http://gpfsug.org/mailman/listinfo/gpfsug-discuss</u></font></tt></a><font size=3><br><br></font><font size=4><br><br>Ellei edellä ole toisin mainittu: / Unless stated otherwise above:<br>Oy IBM Finland Ab<br>PL 265, 00101 Helsinki, Finland<br>Business ID, Y-tunnus: 0195876-3 <br>Registered in Finland<br><br>_______________________________________________<br>gpfsug-discuss mailing list<br>gpfsug-discuss at </font><a href=http://spectrumscale.org/><font size=4 color=blue><u>spectrumscale.org</u></font></a><font size=3 color=blue><u><br></u></font><a href="http://gpfsug.org/mailman/listinfo/gpfsug-discuss"><font size=4 color=blue><u>http://gpfsug.org/mailman/listinfo/gpfsug-discuss</u></font></a><br><tt><font size=3>_______________________________________________<br>gpfsug-discuss mailing list<br>gpfsug-discuss at </font></tt><a href=http://spectrumscale.org/><tt><font size=3 color=blue><u>spectrumscale.org</u></font></tt></a><tt><font size=3 color=blue><u><br></u></font></tt><a href="http://gpfsug.org/mailman/listinfo/gpfsug-discuss"><tt><font size=3 color=blue><u>http://gpfsug.org/mailman/listinfo/gpfsug-discuss</u></font></tt></a><font size=3><br><br><br></font><br><font 
size=3></font><br><BR>