<div dir="ltr"><font face="monospace">that's not how GPFS, ahem, Scale works :-)</font><div><font face="monospace">each client has pre-allocated inodes in memory, and creating files is a matter of spooling records. yes, eventually you need to destage this to disk, but that happens only every few seconds, and since these I/Os are usually very colocated, good storage cache technology can reduce I/Os to physical media significantly.</font></div><div><font face="monospace"><br></font></div><div><font face="monospace">to prove the point, look at these numbers:</font></div><div><div><font face="monospace">-- started at 10/17/2017 14:29:13 --</font></div><div><font face="monospace"><br></font></div><div><font face="monospace">mdtest-1.9.3 was launched with 110 total task(s) on 11 node(s)</font></div><div><font face="monospace">Command line used: /ghome/oehmes/mpi/bin/mdtest-pcmpi9131-existingdir -d /ibm/fs2-16m-09/shared/mdtest-ec -i 1 -n 10000 -F -w 0 -Z -p 8 -N 11 -u</font></div><div><font face="monospace">Path: /ibm/fs2-16m-09/shared</font></div><div><font face="monospace">FS: 128.1 TiB Used FS: 0.2% Inodes: 476.8 Mi Used Inodes: 0.0%</font></div><div><font face="monospace"><br></font></div><div><font face="monospace">110 tasks, 1100000 files</font></div><div><font face="monospace"><br></font></div><div><font face="monospace">SUMMARY: (of 1 iterations)</font></div><div><font face="monospace"> Operation Max Min Mean Std Dev</font></div><div><font face="monospace"> --------- --- --- ---- -------</font></div><div><font face="monospace"> File creation : 444221.343 444221.343 444221.343 0.000</font></div><div><font face="monospace"> File stat : 6704498.841 6704498.841 6704498.841 0.000</font></div><div><font face="monospace"> File read : 3859105.596 3859105.596 3859105.596 0.000</font></div><div><font face="monospace"> File removal : 409336.606 409336.606 409336.606 0.000</font></div><div><font face="monospace"> Tree creation : 5.344 5.344 5.344 0.000</font></div><div><font face="monospace"> Tree removal : 1.145 1.145 1.145 0.000</font></div><div><font face="monospace"><br></font></div><div><font face="monospace">-- finished at 10/17/2017 14:29:27 --</font></div></div><div><font face="monospace"><br></font></div><div><font face="monospace">this is a run against a 16MB blocksize filesystem with only spinning disks (just one GL6 ESS), not a single SSD, and as you can see, this system produces 444k creates/second across 11 nodes, far above and beyond what the drives alone can do.</font></div><div><font face="monospace"><br></font></div><div><font face="monospace">and yes, i know this stuff is all very complicated and not easy to explain :-)</font></div><div><font face="monospace"><br></font></div><div><font face="monospace">sven</font></div><div><font face="monospace"><br></font></div><div><font face="monospace"><br></font></div><div class="gmail_quote"><div dir="ltr"><font face="monospace">On Thu, Dec 21, 2017 at 8:35 PM <<a href="mailto:valdis.kletnieks@vt.edu" target="_blank">valdis.kletnieks@vt.edu</a>> wrote:<br></font></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><font face="monospace">On Thu, 21 Dec 2017 16:38:27 +0000, Sven Oehme said:<br>
<br>
> size. so if you now go to a 16MB blocksize and you have just 50 iops @ 2MB<br>
> each you can write ~800 MB/sec with the exact same setup and same size<br>
> small writes, that's a factor of 8 .<br>
<br>
That's assuming your metadata storage is able to handle open/read/write/close<br>
on enough small files per second to push 800MB/sec. If you're talking 128K subblocks,<br>
you're going to need some 6,400 small files per second to fill that pipe...<br>
_______________________________________________<br>
gpfsug-discuss mailing list<br>
gpfsug-discuss at <a href="http://spectrumscale.org" rel="noreferrer" target="_blank">spectrumscale.org</a><br>
<a href="http://gpfsug.org/mailman/listinfo/gpfsug-discuss" rel="noreferrer" target="_blank">http://gpfsug.org/mailman/listinfo/gpfsug-discuss</a></font><br>
</blockquote></div></div>
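
The bandwidth arithmetic the two mails are trading can be sketched as below. This is a minimal back-of-the-envelope check, assuming my reading of the quoted sentence: a baseline of 50 IOPS at 2 MB per I/O, versus the same 50 IOPS at a 16 MB blocksize, with Valdis's small files landing in 128 KiB subblocks.

```python
MiB = 1024 * 1024
KiB = 1024

# Baseline (assumed reading of the quoted mail): 50 IOPS at 2 MB per I/O.
baseline = 50 * 2 * MiB        # 100 MiB/sec

# Same 50 IOPS, but each I/O is now a full 16 MB block.
large_block = 50 * 16 * MiB    # 800 MiB/sec

print(large_block // MiB, "MB/sec, factor", large_block // baseline)
# -> 800 MB/sec, factor 8  (matches "~800 MB/sec ... a factor of 8")

# Valdis's counterpoint: how many 128 KiB small files per second
# it takes to actually fill that 800 MB/sec pipe.
files_per_sec = large_block // (128 * KiB)
print(files_per_sec, "small files/sec")
# -> 6400 small files/sec (matches "some 6,400 small files per second")
```

So both numbers in the thread are internally consistent; the disagreement is only about whether the metadata path can sustain 6,400 opens/sec, which is what the mdtest run above speaks to.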