<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
</head>
<body style="word-wrap: break-word; -webkit-nbsp-mode: space; line-break: after-white-space;" class="">
Hi All,
<div class=""><br class="">
</div>
<div class="">This is all making sense and I appreciate everyone’s responses … and again I apologize for not thinking about the indirect blocks.</div>
<div class=""><br class="">
</div>
<div class="">Marc - we specifically chose 4K inodes when we created this filesystem a little over a year ago so that small files could fit in the inode and therefore be stored on the metadata SSDs.</div>
<div class=""><br class="">
</div>
<div class="">This is more of a curiosity question … is it documented somewhere how a 4K inode is used?  I understand that for very small files up to 3.5K of that can be for data, but what about for large files?  I.e., how much of that 4K is used for block
 addresses  (3.5K plus whatever portion was already allocated to block addresses??) … or what I’m really asking is, given 4K inodes and a 1M block size how big does a file have to be before it will need to use indirect blocks?</div>
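<div class=""><br class="">
</div>
<div class="">For what it’s worth, here is the back-of-the-envelope sketch I’ve been playing with while waiting for a definitive answer.  The ~3.5K of usable space is the figure mentioned above; the 12 bytes per block address is purely an assumption on my part, so the resulting threshold is illustrative only:</div>
<pre class="">
# Rough sketch only -- not authoritative.
# Assumptions (not confirmed against GPFS internals):
#   - roughly 3.5K of a 4K inode is usable either for in-inode data
#     or for direct data block addresses,
#   - each block address occupies about 12 bytes.
USABLE_IN_INODE = int(3.5 * 1024)   # bytes available in the inode (assumed)
ADDR_SIZE       = 12                # bytes per block address (assumed)
BLOCK_SIZE      = 1024 * 1024       # 1M filesystem block size

max_direct_addrs = USABLE_IN_INODE // ADDR_SIZE
largest_without_indirect = max_direct_addrs * BLOCK_SIZE

print(f"direct block addresses per inode : {max_direct_addrs}")
print(f"largest file w/o indirect blocks : {largest_without_indirect / 2**20:.0f} MiB")
# With these assumed numbers a file larger than roughly 300 MiB would need
# indirect blocks; the real cutover depends on the actual inode layout.
</pre>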
<div class=""><br class="">
</div>
<div class="">Thanks again…</div>
<div class=""><br class="">
</div>
<div class="">Kevin<br class="">
<div><br class="">
<blockquote type="cite" class="">
<div class="">On Jan 23, 2018, at 1:12 PM, Marc A Kaplan <<a href="mailto:makaplan@us.ibm.com" class="">makaplan@us.ibm.com</a>> wrote:</div>
<br class="Apple-interchange-newline">
<div class=""><font size="2" face="sans-serif" class="">If one were starting over, it might make sense to use a  smaller inode size.  I believe we still support 512, 1K, 2K.</font><br class="">
<font size="2" face="sans-serif" class="">Tradeoff with the fact that inodes can store data and EAs.<br class="">
</font><br class="">
<br class="">
<br class="">
<br class="">
<font size="1" color="#5f5f5f" face="sans-serif" class="">From:        </font><font size="1" face="sans-serif" class="">"Uwe Falke" <<a href="mailto:UWEFALKE@de.ibm.com" class="">UWEFALKE@de.ibm.com</a>></font><br class="">
<font size="1" color="#5f5f5f" face="sans-serif" class="">To:        </font><font size="1" face="sans-serif" class="">gpfsug main discussion list <<a href="mailto:gpfsug-discuss@spectrumscale.org" class="">gpfsug-discuss@spectrumscale.org</a>></font><br class="">
<font size="1" color="#5f5f5f" face="sans-serif" class="">Date:        </font><font size="1" face="sans-serif" class="">01/23/2018 04:04 PM</font><br class="">
<font size="1" color="#5f5f5f" face="sans-serif" class="">Subject:        </font><font size="1" face="sans-serif" class="">Re: [gpfsug-discuss] Metadata only system pool</font><br class="">
<font size="1" color="#5f5f5f" face="sans-serif" class="">Sent by:        </font><font size="1" face="sans-serif" class=""><a href="mailto:gpfsug-discuss-bounces@spectrumscale.org" class="">gpfsug-discuss-bounces@spectrumscale.org</a></font><br class="">
<hr noshade="" class="">
<br class="">
<br class="">
<br class="">
<tt class=""><font size="2" class="">rough calculation (assuming 4k inodes): <br class="">
350 x 10^6 x 4096 Bytes=1.434TB=1.304TiB. With replication that uses <br class="">
2.877TB or 2.308TiB <br class="">
As already mentioned here, directory and indirect blocks come on top. Even <br class="">
if you could get rid of a portion of the allocated and unused inodes that <br class="">
metadata pool appears bit small to me. <br class="">
If that is a large filesystem there should be some funding to extend it. <br class="">
If you have such a many-but-small-files system as discussed recently in <br class="">
this theatre, you might still beg for more MD storage but that makes than <br class="">
a larger portion of the total cost (assuming data storage is on HDD and md <br class="">
storage on SSD) and that again reduces your chances. <br class="">
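<br class="">
A quick sketch of that arithmetic, purely for illustration (the factor of <br class="">
2 assumes metadata replication of 2, and directory and indirect blocks <br class="">
are not included): <br class="">
<pre class="">
# Back-of-the-envelope metadata sizing for the 350,000,128 allocated
# 4K inodes quoted below.  The replication factor of 2 is an assumption;
# adjust it to match the filesystem's actual metadata replication.
inodes      = 350_000_128
inode_size  = 4096
replication = 2

raw = inodes * inode_size
print(f"inodes alone    : {raw / 1e12:.3f} TB = {raw / 2**40:.3f} TiB")
print(f"with replication: {raw * replication / 1e12:.3f} TB = {raw * replication / 2**40:.3f} TiB")
# Directory blocks and indirect blocks come on top of this, which is why
# a ~2.9T metadata pool leaves very little headroom.
</pre>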
<br class="">
<br class="">
<br class="">
<br class="">
Mit freundlichen Grüßen / Kind regards<br class="">
<br class="">
<br class="">
Dr. Uwe Falke<br class="">
<br class="">
IT Specialist<br class="">
High Performance Computing Services / Integrated Technology Services / <br class="">
Data Center Services<br class="">
-------------------------------------------------------------------------------------------------------------------------------------------<br class="">
IBM Deutschland<br class="">
Rathausstr. 7<br class="">
09111 Chemnitz<br class="">
Phone: +49 371 6978 2165<br class="">
Mobile: +49 175 575 2877<br class="">
<a href="mailto:uwefalke@de.ibm.com" class="">E-Mail: uwefalke@de.ibm.com</a><br class="">
-------------------------------------------------------------------------------------------------------------------------------------------<br class="">
IBM Deutschland Business & Technology Services GmbH / Managing directors: <br class="">
Thomas Wolter, Sven Schooß<br class="">
Registered office: Ehningen / Register court: Amtsgericht Stuttgart, <br class="">
HRB 17122 <br class="">
<br class="">
<br class="">
<br class="">
<br class="">
From:   "Buterbaugh, Kevin L" <Kevin.Buterbaugh@Vanderbilt.Edu><br class="">
To:     gpfsug main discussion list <gpfsug-discuss@spectrumscale.org><br class="">
Date:   01/23/2018 06:17 PM<br class="">
Subject:        [gpfsug-discuss] Metadata only system pool<br class="">
Sent by:        gpfsug-discuss-bounces@spectrumscale.org<br class="">
<br class="">
<br class="">
<br class="">
Hi All, <br class="">
<br class="">
I was under the (possibly false) impression that if you have a filesystem <br class="">
where the system pool contains metadata only then the only thing that <br class="">
would cause the amount of free space in that pool to change is the <br class="">
creation of more inodes … is that correct?  In other words, given that I <br class="">
have a filesystem with 130 million free (but allocated) inodes:<br class="">
<br class="">
Inode Information<br class="">
-----------------<br class="">
Number of used inodes:       218635454<br class="">
Number of free inodes:       131364674<br class="">
Number of allocated inodes:  350000128<br class="">
Maximum number of inodes:    350000128<br class="">
<br class="">
I would not expect that a user creating a few hundred or thousands of <br class="">
files could cause a “no space left on device” error (which I’ve got one <br class="">
user getting).  There’s plenty of free data space, BTW.<br class="">
<br class="">
Now my system pool is almost “full”:<br class="">
<br class="">
(pool total)           2.878T          34M (  0%)        140.9M ( 0%)<br class="">
<br class="">
But again, what - outside of me creating more inodes - would cause that to <br class="">
change??<br class="">
<br class="">
Thanks…<br class="">
<br class="">
Kevin<br class="">
<br class="">
--<br class="">
Kevin Buterbaugh - Senior System Administrator<br class="">
Vanderbilt University - Advanced Computing Center for Research and <br class="">
Education<br class="">
Kevin.Buterbaugh@vanderbilt.edu - (615)875-9633<br class="">
<br class="">
<br class="">
_______________________________________________<br class="">
gpfsug-discuss mailing list<br class="">
gpfsug-discuss at spectrumscale.org<br class="">
</font></tt><a href="https://na01.safelinks.protection.outlook.com/?url=https%3A%2F%2Furldefense.proofpoint.com%2Fv2%2Furl%3Fu%3Dhttp-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss%26d%3DDwIFAw%26c%3Djf_iaSHvJObTbx-siA1ZOg%26r%3DcvpnBBH0j41aQy0RPiG2xRL_M8mTc1izuQD3_PmtjZ8%26m%3D8WgQlUnhzFycOZf-YvqYpdUkCiyEdRvWukE-KKRuFbE%26s%3DaCywbK-1heVHPR8Fg74z9VxkGbNfCxMdtEKIDMWVIwI%26e%3D&data=02%7C01%7CKevin.Buterbaugh%40vanderbilt.edu%7C77fefde14ec54e04b35708d5629550bd%7Cba5a7f39e3be4ab3b45067fa80faecad%7C0%7C0%7C636523315820548341&sdata=3KmA%2BL3EVMZYzDCK9sb%2FPjwi8UWcFg7tjUVHbIpaTMM%3D&reserved=0" class=""><tt class=""><font size="2" class="">https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwIFAw&c=jf_iaSHvJObTbx-siA1ZOg&r=cvpnBBH0j41aQy0RPiG2xRL_M8mTc1izuQD3_PmtjZ8&m=8WgQlUnhzFycOZf-YvqYpdUkCiyEdRvWukE-KKRuFbE&s=aCywbK-1heVHPR8Fg74z9VxkGbNfCxMdtEKIDMWVIwI&e=</font></tt></a><tt class=""><font size="2" class=""><br class="">
<br class="">
<br class="">
<br class="">
<br class="">
_______________________________________________<br class="">
gpfsug-discuss mailing list<br class="">
gpfsug-discuss at <a href="http://spectrumscale.org" class="">spectrumscale.org</a><br class="">
</font></tt><a href="https://na01.safelinks.protection.outlook.com/?url=https%3A%2F%2Furldefense.proofpoint.com%2Fv2%2Furl%3Fu%3Dhttp-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss%26d%3DDwIFAw%26c%3Djf_iaSHvJObTbx-siA1ZOg%26r%3DcvpnBBH0j41aQy0RPiG2xRL_M8mTc1izuQD3_PmtjZ8%26m%3D8WgQlUnhzFycOZf-YvqYpdUkCiyEdRvWukE-KKRuFbE%26s%3DaCywbK-1heVHPR8Fg74z9VxkGbNfCxMdtEKIDMWVIwI%26e%3D&data=02%7C01%7CKevin.Buterbaugh%40vanderbilt.edu%7C77fefde14ec54e04b35708d5629550bd%7Cba5a7f39e3be4ab3b45067fa80faecad%7C0%7C0%7C636523315820548341&sdata=3KmA%2BL3EVMZYzDCK9sb%2FPjwi8UWcFg7tjUVHbIpaTMM%3D&reserved=0" class=""><tt class=""><font size="2" class="">https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwIFAw&c=jf_iaSHvJObTbx-siA1ZOg&r=cvpnBBH0j41aQy0RPiG2xRL_M8mTc1izuQD3_PmtjZ8&m=8WgQlUnhzFycOZf-YvqYpdUkCiyEdRvWukE-KKRuFbE&s=aCywbK-1heVHPR8Fg74z9VxkGbNfCxMdtEKIDMWVIwI&e=</font></tt></a><tt class=""><font size="2" class=""><br class="">
<br class="">
</font></tt><br class="">
<br class="">
<br class="">
_______________________________________________<br class="">
gpfsug-discuss mailing list<br class="">
gpfsug-discuss at <a href="http://spectrumscale.org" class="">spectrumscale.org</a><br class="">
<a href="https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fgpfsug.org%2Fmailman%2Flistinfo%2Fgpfsug-discuss&data=02%7C01%7CKevin.Buterbaugh%40vanderbilt.edu%7C77fefde14ec54e04b35708d5629550bd%7Cba5a7f39e3be4ab3b45067fa80faecad%7C0%7C0%7C636523315820548341&sdata=rbazh5e%2BxgHGvgF65VHTs9Hf4kk9EtUizsb19l5rr7U%3D&reserved=0" class="">https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fgpfsug.org%2Fmailman%2Flistinfo%2Fgpfsug-discuss&data=02%7C01%7CKevin.Buterbaugh%40vanderbilt.edu%7C77fefde14ec54e04b35708d5629550bd%7Cba5a7f39e3be4ab3b45067fa80faecad%7C0%7C0%7C636523315820548341&sdata=rbazh5e%2BxgHGvgF65VHTs9Hf4kk9EtUizsb19l5rr7U%3D&reserved=0</a><br class="">
</div>
</blockquote>
</div>
<br class="">
</div>
</body>
</html>