<font size=2 face="sans-serif">Just to clarify - it's 2M <b>block </b>size,
so 64k subblock size.</font><br><font size=2 face="sans-serif"><br>Regards,<br><br>Tomer Perry<br>Scalable I/O Development (Spectrum Scale)<br>email: tomp@il.ibm.com<br>1 Azrieli Center, Tel Aviv 67021, Israel<br>Global Tel: +1 720 3422758<br>Israel Tel: +972 3 9188625<br>Mobile: +972 52 2554625<br></font><br><br><br><br><font size=1 color=#5f5f5f face="sans-serif">From:
</font><font size=1 face="sans-serif">"Tomer Perry"
<TOMP@il.ibm.com></font><br><font size=1 color=#5f5f5f face="sans-serif">To:
</font><font size=1 face="sans-serif">gpfsug main discussion
list <gpfsug-discuss@spectrumscale.org></font><br><font size=1 color=#5f5f5f face="sans-serif">Date:
</font><font size=1 face="sans-serif">10/04/2019 23:11</font><br><font size=1 color=#5f5f5f face="sans-serif">Subject:
</font><font size=1 face="sans-serif">Re: [gpfsug-discuss]
Follow-up: ESS File systems</font><br><font size=1 color=#5f5f5f face="sans-serif">Sent by:
</font><font size=1 face="sans-serif">gpfsug-discuss-bounces@spectrumscale.org</font><br><hr noshade><br><br><br><font size=2 face="sans-serif">It's also important to look at the
actual space "wasted" by the "subblock mismatch".<br>For example, a snippet from a filehist output I've found somewhere:</font><font size=3><br></font><font size=2 face="sans-serif"><i><br>File%ile represents the cumulative percentage of files.<br>Space%ile represents the cumulative percentage of total space used.<br>AvlSpc%ile represents the cumulative percentage used of total available
space.</i></font><font size=3><br></font><font size=2 face="sans-serif"><i><br>Histogram of files <= one 2M block in size<br>Subblocks Count File%ile Space%ile AvlSpc%ile<br>--------- -------- ---------- ---------- ----------<br>0 1297314 2.65% 0.00% 0.00%<br>1 34014892 72.11% 0.74% 0.59%<br>2 2217365 76.64% 0.84% 0.67%<br>3 1967998 80.66% 0.96% 0.77%<br>4 798170 82.29% 1.03% 0.83%<br>5 1518258 85.39% 1.20% 0.96%<br>6 581539 86.58% 1.27% 1.02%<br>7 659969 87.93% 1.37% 1.10%<br>8 1178798 90.33% 1.58% 1.27%<br>9 189220 90.72% 1.62% 1.30%<br>10 130197 90.98% 1.64% 1.32%</i></font><font size=3><br><br></font><font size=2 face="sans-serif"><br>So, 72% of the files are smaller than 1 subblock ( 2M in the above case
BTW). If, for example, we double it, ~76% of the files will fit in a single subblock ("wasting" part of it), and if we push it to 16M it will be ~90% of the files...<br>But we really care about capacity, right? So, going to the 16M extreme,
we'll "waste" 1.58% of the capacity (worst case, of course).</font><font size=3><br></font><font size=2 face="sans-serif"><br>So, if it gives you (and this highly depends on the workload, of course) 4X
the performance (just for the sake of discussion) - would it be OK to pay
the 1.5% "premium"?</font><font size=3><br><br></font><font size=2 face="sans-serif"><br><br>Regards,<br><br>Tomer Perry<br>Scalable I/O Development (Spectrum Scale)<br>email: tomp@il.ibm.com<br>1 Azrieli Center, Tel Aviv 67021, Israel<br>Global Tel: +1 720 3422758<br>Israel Tel: +972 3 9188625<br>Mobile: +972 52 2554625</font><font size=3><br><br><br><br></font><font size=1 color=#5f5f5f face="sans-serif"><br>From: </font><font size=1 face="sans-serif">"Marc
A Kaplan" <makaplan@us.ibm.com></font><font size=1 color=#5f5f5f face="sans-serif"><br>To: </font><font size=1 face="sans-serif">gpfsug
main discussion list <gpfsug-discuss@spectrumscale.org></font><font size=1 color=#5f5f5f face="sans-serif"><br>Date: </font><font size=1 face="sans-serif">10/04/2019
20:57</font><font size=1 color=#5f5f5f face="sans-serif"><br>Subject: </font><font size=1 face="sans-serif">Re:
[gpfsug-discuss] Follow-up: ESS File systems</font><font size=1 color=#5f5f5f face="sans-serif"><br>Sent by: </font><font size=1 face="sans-serif">gpfsug-discuss-bounces@spectrumscale.org</font><font size=3><br></font><hr noshade><font size=3><br><br></font><font size=2><br>If you're into pondering some more tweaks:<br><br>-i InodeSize is tunable<br><br>system pool: --metadata-block-size is tunable separately from -B
blocksize<br><br>On ESS you might want to use a different block size and error-correcting
codes for the (v)disks that hold the system pool.<br>Generally, I think you'd want to set up the system pool for best performance
for relatively short reads and updates.</font><font size=3><br></font><tt><font size=2><br>_______________________________________________<br>gpfsug-discuss mailing list<br>gpfsug-discuss at spectrumscale.org</font></tt><font size=3 color=blue><u><br></u></font><a href="http://gpfsug.org/mailman/listinfo/gpfsug-discuss"><tt><font size=2 color=blue><u>http://gpfsug.org/mailman/listinfo/gpfsug-discuss</u></font></tt></a><br><br>
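The back-of-envelope "premium" arithmetic in the thread above can be sketched as follows. This is a hypothetical illustration, not the filehist tool itself: it assumes the classic fixed ratio of 32 subblocks per block (so a 2M block gives a 64k subblock, and a 16M block a 512k subblock), and simply reads the cumulative File%ile/Space%ile columns from the quoted histogram. Newer Spectrum Scale releases use variable subblock counts, so this ratio is an assumption for the sake of the example.

```python
# Hedged sketch: reproduce the capacity-"waste" estimate from the
# quoted filehist output. Assumes 32 subblocks per block (so a 2M
# block -> 64k subblock); newer Spectrum Scale releases vary this.

# (subblocks used, cumulative File%ile, cumulative Space%ile),
# copied from the histogram above (2M block, 64k subblock).
FILEHIST = [
    (0, 2.65, 0.00),
    (1, 72.11, 0.74),
    (2, 76.64, 0.84),
    (3, 80.66, 0.96),
    (4, 82.29, 1.03),
    (5, 85.39, 1.20),
    (6, 86.58, 1.27),
    (7, 87.93, 1.37),
    (8, 90.33, 1.58),
    (9, 90.72, 1.62),
    (10, 90.98, 1.64),
]

OLD_SUBBLOCK_KIB = 2 * 1024 // 32  # 64k subblock of the measured file system


def one_subblock_stats(new_block_kib):
    """For a proposed block size, return (File%ile, Space%ile) of the
    files that would fit inside a single new subblock -- i.e. the share
    of files that round up to one subblock, and the worst-case share of
    capacity they could "waste"."""
    new_subblock_kib = new_block_kib // 32        # assumed 32 subblocks/block
    ratio = new_subblock_kib // OLD_SUBBLOCK_KIB  # old subblocks per new subblock
    for subblocks, file_pct, space_pct in FILEHIST:
        if subblocks == ratio:
            return file_pct, space_pct
    raise ValueError("ratio falls outside the quoted histogram rows")


# 16M blocks: ~90% of files fit in one 512k subblock, yet they hold
# only ~1.58% of the used capacity -- the "premium" discussed above.
print(one_subblock_stats(16 * 1024))  # (90.33, 1.58)
# Doubling to 4M: ~76% of files fit in one 128k subblock.
print(one_subblock_stats(4 * 1024))   # (76.64, 0.84)
```

This matches the thread's numbers: going to the 16M extreme affects ~90% of files but, worst case, only ~1.58% of capacity.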