<div dir="ltr">Daniel, <div><br></div><div>while this might be easier to think about it, its not true :-)</div><div>lets just use an example.  a disk drive can do 100 io's per second with 128kb random writes and 80 iops with 256kb writes . now lets do the math with a 8+2p setup for each of the 2 cases. this means you can do 100 times 1mb writes (8*128k) or 80 times 2 mb writes so 100 MB/sec or 160 MB/sec with the exact same drives. given you can fit 2 times as many subblocks into the 2mb block you would gain 60% of speed by just going to this larger size. so if you now go to a 16MB blocksize and you have just 50 iops @ 2MB each you can write ~800 MB/sec with the exact same setup and same size small writes, that's a factor of 8 .</div><div>so i/o size AND number of subblocks matter.  </div><div><br></div><div dir="ltr"><div>Sven</div><div><br></div><div><br></div></div><br><div class="gmail_quote"><div dir="ltr">On Thu, Dec 21, 2017 at 12:22 PM Daniel Kidger <<a href="mailto:daniel.kidger@uk.ibm.com" target="_blank">daniel.kidger@uk.ibm.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="auto">My suggestion is that it is better to not think of the performance coming from having more than 32 sub-blocks but instead that the performance comes from smaller sub-blocks. The fact that there are now more of them in say a 4MB blocksize filesytem is just a side effect.<br><br><div id="m_2355646542901383616m_-2017726886823458605AppleMailSignature"><div style="margin-bottom:0.0001pt;line-height:normal"><span style="background-color:rgba(255,255,255,0)">Daniel</span></div><div style="margin-bottom:0.0001pt;line-height:normal"><span style="background-color:rgba(255,255,255,0)"><img alt="/spectrum_storage-banne" src="http://ausgsa.ibm.com/projects/t/tivoli_visual_design/public/2015/Spectrum-Storage/Email-signatures/Storage/spectrum_storage-banner.png" width="432.5" height="auto" style="width:601px;height:5px"></span></div><div style="margin-bottom:0.0001pt;line-height:normal"><span style="background-color:rgba(255,255,255,0)"><br> </span></div><table border="0" cellpadding="0" cellspacing="0" style="font-family:sans-serif"><tbody><tr style="word-wrap:break-word;max-width:447.5px"><td style="word-wrap:break-word;max-width:447.5px;width:201px;padding:0cm"><div style="margin-bottom:0.0001pt;line-height:normal"><img alt="Spectrum Scale Logo" src="http://ausgsa.ibm.com/projects/t/tivoli_visual_design/public/2015/Spectrum-Storage/Email-signatures/Storage/spectrum_scale-logo.png" style="width:75px;height:120px;float:left"></div><div style="margin-bottom:0.0001pt;line-height:normal"><font face=".SFUIDisplay"><span style="background-color:rgba(255,255,255,0)"> </span></font></div></td><td style="word-wrap:break-word;max-width:447.5px;width:21px;padding:0cm"><font face=".SFUIDisplay"><span style="background-color:rgba(255,255,255,0)"> </span></font></td><td style="word-wrap:break-word;max-width:447.5px;width:202px;padding:0cm"><div style="margin-bottom:0.0001pt;line-height:normal"><font face=".SFUIDisplay"><span style="background-color:rgba(255,255,255,0)"><strong>Dr Daniel Kidger</strong> <br>IBM Technical Sales Specialist<br>Software Defined Solution Sales<br><br><a dir="ltr" href="tel:+%2044-7818%20522%20266" target="_blank">+</a> <a dir="ltr" href="tel:+%2044-7818%20522%20266" target="_blank">44-(0)7818 522 266</a> <br><a dir="ltr" href="mailto:daniel.kidger@uk.ibm.com" 
target="_blank">daniel.kidger@uk.ibm.com</a></span></font></div></td></tr></tbody></table></div></div><div dir="auto"><div><br>On 19 Dec 2017, at 21:32, Aaron Knister <<a href="mailto:aaron.s.knister@nasa.gov" target="_blank">aaron.s.knister@nasa.gov</a>> wrote:<br><br></div></div><div dir="auto"><blockquote type="cite"><div><span>Thanks, Sven. Understood!</span><br><span></span><br><span>On 12/19/17 3:20 PM, Sven Oehme wrote:</span><br><blockquote type="cite"><span>Hi,</span><br></blockquote><blockquote type="cite"><span></span><br></blockquote><blockquote type="cite"><span>the zero padding was never promoted into a GA stream, it was an</span><br></blockquote><blockquote type="cite"><span>experiment to proof we are on the right track when we eliminate the</span><br></blockquote><blockquote type="cite"><span>overhead from client to NSD Server, but also showed that alone is not</span><br></blockquote><blockquote type="cite"><span>good enough. the work for the client is the same compared to the >32</span><br></blockquote><blockquote type="cite"><span>subblocks, but the NSD Server has more work as it can't pack as many</span><br></blockquote><blockquote type="cite"><span>subblocks and therefore files into larger blocks, so you need to do more</span><br></blockquote><blockquote type="cite"><span>writes to store the same number of files. </span><br></blockquote><blockquote type="cite"><span>thats why there is the additional substantial improvement  when we then</span><br></blockquote><blockquote type="cite"><span>went to >32 subblocks. </span><br></blockquote><blockquote type="cite"><span></span><br></blockquote><blockquote type="cite"><span>sven</span><br></blockquote><blockquote type="cite"><span></span><br></blockquote><blockquote type="cite"><span>On Mon, Dec 18, 2017 at 9:13 PM Knister, Aaron S. (GSFC-606.2)[COMPUTER</span><br></blockquote><blockquote type="cite"><span>SCIENCE CORP] <<a href="mailto:aaron.s.knister@nasa.gov" target="_blank">aaron.s.knister@nasa.gov</a></span><br></blockquote><blockquote type="cite"><span><<a href="mailto:aaron.s.knister@nasa.gov" target="_blank">mailto:aaron.s.knister@nasa.gov</a>>> wrote:</span><br></blockquote><blockquote type="cite"><span></span><br></blockquote><blockquote type="cite"><span>    Thanks Sven! That makes sense to me and is what I thought was the</span><br></blockquote><blockquote type="cite"><span>    case which is why I was confused when I saw the reply to the thread</span><br></blockquote><blockquote type="cite"><span>    that said the >32 subblocks code had no performance impact. </span><br></blockquote><blockquote type="cite"><span></span><br></blockquote><blockquote type="cite"><span>    A couple more question for you— in your presentation there’s a</span><br></blockquote><blockquote type="cite"><span>    benchmark that shows the file create performance without the zero</span><br></blockquote><blockquote type="cite"><span>    padding. Since you mention this is done for security reasons was</span><br></blockquote><blockquote type="cite"><span>    that feature ever promoted to a GA Scale release? 
>>> On Mon, Dec 18, 2017 at 9:13 PM Knister, Aaron S. (GSFC-606.2)[COMPUTER SCIENCE CORP] <aaron.s.knister@nasa.gov> wrote:
>>>
>>>> Thanks Sven! That makes sense to me and is what I thought was the case, which is why I was confused when I saw the reply to the thread that said the >32 subblocks code had no performance impact.
>>>>
>>>> A couple more questions for you: in your presentation there's a benchmark that shows the file create performance without the zero padding. Since you mention this is done for security reasons, was that feature ever promoted to a GA Scale release? I'm also wondering if you could explain the performance difference between the no-zero-padding code and the >32 subblock code, since given your example of 32K files and a 16MB block size I figure both cases ought to write the same amount to disk.
>>>>
>>>> Thanks!
>>>>
>>>> -Aaron
>>>>
>>>> On December 15, 2017 at 18:07:23 EST, Sven Oehme <oehmes@gmail.com> wrote:
>>>>
>>>>> I thought I answered that already, but maybe I just thought about answering it and then forgot about it :-D
>>>>>
>>>>> So yes, more than 32 subblocks per block significantly increases the performance of filesystems with small files; for the sake of the argument let's say 32k files in a large block filesystem, again for the sake of argument say 16MB.
>>>>>
>>>>> You probably ask why?
>>>>>
>>>>> If you create a file and write 32k into it in a pre-5.0.0 version 16 MB filesystem, your client actually doesn't write 32k to the NSD server, it writes 512k, because that's the subblock size and we need to write the full subblock (for security reasons).
>>>>> So first you waste significant memory on the client to cache that zero padding, you waste network bandwidth, and you waste NSD server cache because you store it there too. This means you overrun the cache more quickly, which means you start doing read-modify-writes earlier on all your nice large RAID tracks... I guess you get the story by now.
>>>>>
>>>>> In fact, if you have a good RAID code that can drive a lot of bandwidth out of individual drives, like a GNR system, you get more performance for small file writes the larger your blocksize is, because we can 'pack' more files into larger I/Os and therefore turn a small file create workload into a bandwidth workload, essentially exactly what we did and what I demonstrated in the CORAL presentation.
>>>>>
>>>>> Hope that makes this crystal clear now.
>>>>>
>>>>> Sven
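The per-file padding cost is easy to quantify as well; in this sketch the 512 KiB subblock follows from 16 MiB / 32, while the 16 KiB subblock for the >32-subblocks case is an assumed illustrative value:

    import math

    def bytes_shipped_per_file(file_size, subblock_size):
        # Data the client sends (and caches) per small file: whole, zero-padded subblocks.
        return math.ceil(file_size / subblock_size) * subblock_size

    old = bytes_shipped_per_file(32 * 1024, 512 * 1024)  # 524288 bytes for a 32 KiB file
    new = bytes_shipped_per_file(32 * 1024, 16 * 1024)   #  32768 bytes for the same file
    print(old // new)                                    # 16x less data on the wire and in cache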
>>>>> On Fri, Dec 15, 2017 at 10:47 PM Aaron Knister <aaron.s.knister@nasa.gov> wrote:
>>>>>
>>>>>> Thanks, Alex. I'm all too familiar with the trade-offs between large blocks and small files, and we do use pretty robust SSD storage for our metadata. We support a wide range of workloads; we have some folks with many small (<1M) files and other folks with many large (>256MB) files.
>>>>>>
>>>>>> My point in this thread is that IBM has said over and over again in presentations that there is a significant performance gain with the >32 subblocks code on filesystems with large block sizes (although to your point I'm not clear on exactly what large means, since I didn't define large in this context).
>>>>>> Therefore, given that the >32 subblock code gives a significant performance gain, one could reasonably assume that having a filesystem with >32 subblocks is required to see this gain (rather than just running the >32 subblocks code on an fs w/o >32 subblocks).
>>>>>>
>>>>>> This led me to ask about a migration tool, because in my mind if there's a performance gain from having >32 subblocks on the FS I'd like that feature, and having to manually copy 10's of PB to new hardware to get this performance boost is unacceptable. However, IBM can't seem to make up their mind about whether or not the >32 subblocks code *actually* provides a performance increase. This seems like a pretty straightforward question.
>>>>>>
>>>>>> -Aaron
>>>>>>
>>>>>> On 12/15/17 3:48 PM, Alex Chekholko wrote:
>>>>>>
>>>>>>> Hey Aaron,
>>>>>>>
>>>>>>> Can you define your sizes for "large blocks" and "small files"?
>>>>>>> If you dial one up and the other down, your performance will be worse. And in any case it's a pathological corner case, so it shouldn't matter much for your workflow, unless you've designed your system with the wrong values.
>>>>>>>
>>>>>>> For example, for bioinformatics workloads, I prefer to use a 256KB filesystem block size, and I'd consider 4MB+ to be "large block size", which would make the filesystem obviously unsuitable for processing millions of 8KB files.
>>>>>>>
>>>>>>> You can make a histogram of file sizes in your existing filesystems and then make your subblock size (1/32 of block size) on the smaller end of that. Also, definitely use the "small file in inode" feature and put your metadata on SSD.
>>>>>>>
>>>>>>> Regards,
>>>>>>> Alex
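One way to get that histogram is a short walk like the sketch below; the path argument and power-of-two buckets are arbitrary choices, and on a large GPFS filesystem a policy scan (mmapplypolicy with a LIST rule) would gather the same numbers much faster:

    # Bucket file sizes into powers of two to guide blocksize/subblock decisions.
    import collections, os, sys

    counts = collections.Counter()
    for root, _dirs, files in os.walk(sys.argv[1] if len(sys.argv) > 1 else "."):
        for name in files:
            try:
                size = os.lstat(os.path.join(root, name)).st_size
            except OSError:
                continue
            counts[0 if size == 0 else 1 << (size - 1).bit_length()] += 1

    for bucket in sorted(counts):
        print(f"<= {bucket:>14,} bytes : {counts[bucket]:>10,} files")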
>>>>>>> On Fri, Dec 15, 2017 at 11:49 AM, Aaron Knister <aaron.s.knister@nasa.gov> wrote:
>>>>>>>
>>>>>>>> Thanks, Bill.
>>>>>>>>
>>>>>>>> I still don't feel like I've got a clear answer from IBM, and frankly the core issue of a lack of migration tool was totally dodged.
href="https://urldefense.proofpoint.com/v2/url?u=http-3A__files.gpfsug.org_presentations_2017_SC17_SC17-2DUG-2DCORAL-5FV3.pdf&d=DwIGaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=HlQDuUjgJx4p54QzcXd0_zTwf4Cr2t3NINalNhLTA2E&m=ROiUtPAdbQ6DF9wWYS4MIUax_Xetm1p9qXbKzt6ZVf4&s=EdlC_gbmU-xxT7HcFq8IYttHSMts8BdrbqDSCqnt-_g&e=" target="_blank">https://urldefense.proofpoint.com/v2/url?u=http-3A__files.gpfsug.org_presentations_2017_SC17_SC17-2DUG-2DCORAL-5FV3.pdf&d=DwIGaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=HlQDuUjgJx4p54QzcXd0_zTwf4Cr2t3NINalNhLTA2E&m=ROiUtPAdbQ6DF9wWYS4MIUax_Xetm1p9qXbKzt6ZVf4&s=EdlC_gbmU-xxT7HcFq8IYttHSMts8BdrbqDSCqnt-_g&e=</a></span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>        <<a href="https://urldefense.proofpoint.com/v2/url?u=http-3A__files.gpfsug.org_presentations_2017_SC17_SC17-2DUG-2DCORAL-5FV3.pdf&d=DwIGaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=HlQDuUjgJx4p54QzcXd0_zTwf4Cr2t3NINalNhLTA2E&m=ROiUtPAdbQ6DF9wWYS4MIUax_Xetm1p9qXbKzt6ZVf4&s=EdlC_gbmU-xxT7HcFq8IYttHSMts8BdrbqDSCqnt-_g&e=" target="_blank">https://urldefense.proofpoint.com/v2/url?u=http-3A__files.gpfsug.org_presentations_2017_SC17_SC17-2DUG-2DCORAL-5FV3.pdf&d=DwIGaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=HlQDuUjgJx4p54QzcXd0_zTwf4Cr2t3NINalNhLTA2E&m=ROiUtPAdbQ6DF9wWYS4MIUax_Xetm1p9qXbKzt6ZVf4&s=EdlC_gbmU-xxT7HcFq8IYttHSMts8BdrbqDSCqnt-_g&e=</a>>)</span><br></blockquote></blockquote></div></blockquote></div><div dir="auto"><blockquote type="cite"><div><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     he mentions "It has a significant performance penalty</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>        for small files in</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     large block size filesystems" and the demonstrates that</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>        with several</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     mdtest runs (which show the effect with and without the >32</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     subblocks code):</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     4.2.1 base code - SUMMARY: (of 3 iterations)</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     File creation : Mean = 2237.644</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     zero-end-of-file-padding (4.2.2 + ifdef for zero</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>        padding):  SUMMARY: (of</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     3 
>>>>>>>> File creation : Mean = 12866.842
>>>>>>>>
>>>>>>>> more sub blocks per block (4.2.2 + morethan32subblock code): SUMMARY: (of 3 iterations)
>>>>>>>> File creation : Mean = 40316.721
>>>>>>>>
>>>>>>>> Can someone (ideally Sven) give me a straight answer as to whether or not the >32 subblock code actually makes a performance difference for small files in large block filesystems? And if not, help me understand why his slides and the provided benchmark data have consistently indicated it does?
>>>>>>>>
>>>>>>>> -Aaron
>>>>>>>>
>>>>>>>> On 12/1/17 11:44 AM, Bill Hartner wrote:
>>>>>>>>
>>>>>>>>> ESS GL4 4u106 w/ 10 TB drives - same HW Sven reported some of the results @ user group meeting.
>>>>>>>>>
>>>>>>>>> -Bill
type="cite"><span>     ></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     > Bill Hartner</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     > IBM Systems</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     > Scalable I/O Development</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     > Austin, Texas</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     > <a href="mailto:bhartner@us.ibm.com" target="_blank">bhartner@us.ibm.com</a> <<a href="mailto:bhartner@us.ibm.com" target="_blank">mailto:bhartner@us.ibm.com</a>></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>        <<a href="mailto:bhartner@us.ibm.com" target="_blank">mailto:bhartner@us.ibm.com</a> <<a href="mailto:bhartner@us.ibm.com" target="_blank">mailto:bhartner@us.ibm.com</a>>></span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     > home office <a href="tel:(512)%20784-0980" value="+15127840980" target="_blank">512-784-0980</a> <tel:(512)%20784-0980></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>        <<a href="tel:512-784-0980" target="_blank">tel:512-784-0980</a> <tel:(512)%20784-0980>></span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     ></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     ></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     > Inactive hide details for Jan-Frode Myklebust</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>        ---12/01/2017 06:53:44</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     > AM---Bill, could you say something about what the</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>        metadataJan-Frode</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     > Myklebust ---12/01/2017 06:53:44 AM---Bill, could you</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>        say something</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     > about what the metadata-storage here was?</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>        ESS/NL-SAS/3way replication?</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     ></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     > From: Jan-Frode Myklebust <<a href="mailto:janfrode@tanso.net" target="_blank">janfrode@tanso.net</a></span><br></blockquote></blockquote></blockquote><blockquote 
type="cite"><blockquote type="cite"><span>        <<a href="mailto:janfrode@tanso.net" target="_blank">mailto:janfrode@tanso.net</a>> <<a href="mailto:janfrode@tanso.net" target="_blank">mailto:janfrode@tanso.net</a></span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>        <<a href="mailto:janfrode@tanso.net" target="_blank">mailto:janfrode@tanso.net</a>>>></span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     > To: gpfsug main discussion list</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>        <<a href="mailto:gpfsug-discuss@spectrumscale.org" target="_blank">gpfsug-discuss@spectrumscale.org</a></span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>        <<a href="mailto:gpfsug-discuss@spectrumscale.org" target="_blank">mailto:gpfsug-discuss@spectrumscale.org</a>></span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     <<a href="mailto:gpfsug-discuss@spectrumscale.org" target="_blank">mailto:gpfsug-discuss@spectrumscale.org</a></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>        <<a href="mailto:gpfsug-discuss@spectrumscale.org" target="_blank">mailto:gpfsug-discuss@spectrumscale.org</a>>>></span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     > Date: 12/01/2017 06:53 AM</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     > Subject: Re: [gpfsug-discuss] Online data migration tool</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     > Sent by: <a href="mailto:gpfsug-discuss-bounces@spectrumscale.org" target="_blank">gpfsug-discuss-bounces@spectrumscale.org</a></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>        <<a href="mailto:gpfsug-discuss-bounces@spectrumscale.org" target="_blank">mailto:gpfsug-discuss-bounces@spectrumscale.org</a>></span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     <<a href="mailto:gpfsug-discuss-bounces@spectrumscale.org" target="_blank">mailto:gpfsug-discuss-bounces@spectrumscale.org</a></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>        <<a href="mailto:gpfsug-discuss-bounces@spectrumscale.org" target="_blank">mailto:gpfsug-discuss-bounces@spectrumscale.org</a>>></span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     ></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     ></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>   </span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>         ------------------------------------------------------------------------</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     ></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote 
type="cite"><blockquote type="cite"><span>     ></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     ></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     > Bill, could you say something about what the</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>        metadata-storage here was?</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     > ESS/NL-SAS/3way replication?</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     ></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     > I just asked about this in the internal slack channel</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>        #scale-help today..</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     ></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     ></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     ></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     > -jf</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     ></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     > fre. 1. des. 2017 kl. 
>>>>>>>>> On Fri, 1 Dec 2017 at 13:44, Bill Hartner <bhartner@us.ibm.com> wrote:
>>>>>>>>>
>>>>>>>>>>> "It has a significant performance penalty for small files in large block size filesystems"
>>>>>>>>>>
>>>>>>>>>> Aaron,
>>>>>>>>>>
>>>>>>>>>> Below are mdtest results for a test we ran for CORAL - file size was 32k.
>>>>>>>>>>
>>>>>>>>>> We have not gone back and run the test on a file system formatted without >32 subblocks.
>>>>>>>>>> We'll do that at some point...
>>>>>>>>>>
>>>>>>>>>> -Bill
>>>>>>>>>>
>>>>>>>>>> -- started at 10/28/2017 17:51:38 --
>>>>>>>>>>
>>>>>>>>>> mdtest-1.9.3 was launched with 228 total task(s) on 12 node(s)
>>>>>>>>>> Command line used: /tmp/mdtest-binary-dir/mdtest -d /ibm/fs2-16m-10/mdtest-60000 -i 3 -n 294912 -w 32768 -C -F -r -p 360 -u -y
>>>>>>>>>> Path: /ibm/fs2-16m-10
>>>>>>>>>> FS: 128.1 TiB   Used FS: 0.3%   Inodes: 476.8 Mi   Used Inodes: 0.0%
>>>>>>>>>>
>>>>>>>>>> 228 tasks, 67239936 files
>>>>>>>>>>
>>>>>>>>>> SUMMARY: (of 3 iterations)
>>>>>>>>>> Operation       Max        Min        Mean       Std Dev
>>>>>>>>>> ---------       ---        ---        ----       -------
>>>>>>>>>> File creation : 51953.498  50558.517  51423.221  616.643
>>>>>>>>>> File stat     : 0.000      0.000      0.000      0.000
>>>>>>>>>> File read     : 0.000      0.000      0.000      0.000
>>>>>>>>>> File removal  : 96746.376  92149.535  94658.774  1900.187
>>>>>>>>>> Tree creation : 1.588      0.070      0.599      0.700
>>>>>>>>>> Tree removal  : 0.213      0.034      0.097      0.082
>>>>>>>>>>
>>>>>>>>>> -- finished at 10/28/2017 19:51:54 --
>>>>>>>>>>
>>>>>>>>>> Bill Hartner
>>>>>>>>>> IBM Systems
>>>>>>>>>> Scalable I/O Development
>>>>>>>>>> Austin, Texas
>>>>>>>>>> bhartner@us.ibm.com
>>>>>>>>>> home office 512-784-0980
>>>>>>>>>>
type="cite"><blockquote type="cite"><span>     >     <<a href="mailto:gpfsug-discuss-bounces@spectrumscale.org" target="_blank">mailto:gpfsug-discuss-bounces@spectrumscale.org</a></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>        <<a href="mailto:gpfsug-discuss-bounces@spectrumscale.org" target="_blank">mailto:gpfsug-discuss-bounces@spectrumscale.org</a>></span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     <<a href="mailto:gpfsug-discuss-bounces@spectrumscale.org" target="_blank">mailto:gpfsug-discuss-bounces@spectrumscale.org</a></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>        <<a href="mailto:gpfsug-discuss-bounces@spectrumscale.org" target="_blank">mailto:gpfsug-discuss-bounces@spectrumscale.org</a>>>> wrote on</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     11/29/2017 04:41:48 PM:</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     ></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     > From: Aaron Knister <_aaron.knister@gmail.com_</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     <<a href="mailto:aaron.knister@gmail.com" target="_blank">mailto:aaron.knister@gmail.com</a></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>        <<a href="mailto:aaron.knister@gmail.com" target="_blank">mailto:aaron.knister@gmail.com</a>></span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>        <<a href="mailto:aaron.knister@gmail.com" target="_blank">mailto:aaron.knister@gmail.com</a></span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>        <<a href="mailto:aaron.knister@gmail.com" target="_blank">mailto:aaron.knister@gmail.com</a>>>>></span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     ></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     ></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     > To: gpfsug main discussion list</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     <_gpfsug-discuss@spectrumscale.org_</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     <<a href="mailto:gpfsug-discuss@spectrumscale.org" target="_blank">mailto:gpfsug-discuss@spectrumscale.org</a></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>        <<a href="mailto:gpfsug-discuss@spectrumscale.org" target="_blank">mailto:gpfsug-discuss@spectrumscale.org</a>></span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     <<a href="mailto:gpfsug-discuss@spectrumscale.org" 
target="_blank">mailto:gpfsug-discuss@spectrumscale.org</a></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>        <<a href="mailto:gpfsug-discuss@spectrumscale.org" target="_blank">mailto:gpfsug-discuss@spectrumscale.org</a>>>>></span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     ></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     > Date: 11/29/2017 04:42 PM</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     ></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     ></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     > Subject: Re: [gpfsug-discuss] Online data</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>        migration tool</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     > Sent by: _gpfsug-discuss-bounces@spectrumscale.org_</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     <<a href="mailto:gpfsug-discuss-bounces@spectrumscale.org" target="_blank">mailto:gpfsug-discuss-bounces@spectrumscale.org</a></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>        <<a href="mailto:gpfsug-discuss-bounces@spectrumscale.org" target="_blank">mailto:gpfsug-discuss-bounces@spectrumscale.org</a>></span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     <<a href="mailto:gpfsug-discuss-bounces@spectrumscale.org" target="_blank">mailto:gpfsug-discuss-bounces@spectrumscale.org</a></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>        <<a href="mailto:gpfsug-discuss-bounces@spectrumscale.org" target="_blank">mailto:gpfsug-discuss-bounces@spectrumscale.org</a>>>></span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     ></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     ></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     ></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     > Thanks, Nikhil. 
Most of that was consistent with</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>        my understnading,</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     > however I was under the impression that the >32</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>        subblocks code is</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     > required to achieve the touted 50k file</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>        creates/second that Sven has</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     > talked about a bunch of times:</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     ></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     ></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >   </span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>   </span><br></blockquote></blockquote></blockquote></div></blockquote></div><div dir="auto"><blockquote type="cite"><div><blockquote type="cite"><blockquote type="cite"><span>          _<a href="https://urldefense.proofpoint.com/v2/url?u=http-3A__files.gpfsug.org_presentations_2017_Manchester_08-5FResearch-5FTopics.pdf-5F&d=DwIGaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=HlQDuUjgJx4p54QzcXd0_zTwf4Cr2t3NINalNhLTA2E&m=ROiUtPAdbQ6DF9wWYS4MIUax_Xetm1p9qXbKzt6ZVf4&s=V_Pb-mxqz3Ji9fHRp9Ic9_ztzMsHk1bSzTmhbgGkRKU&e=" target="_blank">https://urldefense.proofpoint.com/v2/url?u=http-3A__files.gpfsug.org_presentations_2017_Manchester_08-5FResearch-5FTopics.pdf-5F&d=DwIGaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=HlQDuUjgJx4p54QzcXd0_zTwf4Cr2t3NINalNhLTA2E&m=ROiUtPAdbQ6DF9wWYS4MIUax_Xetm1p9qXbKzt6ZVf4&s=V_Pb-mxqz3Ji9fHRp9Ic9_ztzMsHk1bSzTmhbgGkRKU&e=</a></span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>   </span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>         <<a href="https://urldefense.proofpoint.com/v2/url?u=http-3A__files.gpfsug.org_presentations_2017_Manchester_08-5FResearch-5FTopics.pdf-5F&d=DwIGaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=HlQDuUjgJx4p54QzcXd0_zTwf4Cr2t3NINalNhLTA2E&m=ROiUtPAdbQ6DF9wWYS4MIUax_Xetm1p9qXbKzt6ZVf4&s=V_Pb-mxqz3Ji9fHRp9Ic9_ztzMsHk1bSzTmhbgGkRKU&e=" target="_blank">https://urldefense.proofpoint.com/v2/url?u=http-3A__files.gpfsug.org_presentations_2017_Manchester_08-5FResearch-5FTopics.pdf-5F&d=DwIGaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=HlQDuUjgJx4p54QzcXd0_zTwf4Cr2t3NINalNhLTA2E&m=ROiUtPAdbQ6DF9wWYS4MIUax_Xetm1p9qXbKzt6ZVf4&s=V_Pb-mxqz3Ji9fHRp9Ic9_ztzMsHk1bSzTmhbgGkRKU&e=</a>></span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >   </span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>   </span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>          <<a 
href="https://urldefense.proofpoint.com/v2/url?u=http-3A__files.gpfsug.org_presentations_2017_Manchester_08-5FResearch-5FTopics.pdf&d=DwMFaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=Ew59QH6nxuyx6oTs7a8AYX7kKG3gaWUGDGo5ZZr3wQ4&m=KLv9eH4GG8WlXC5ENj_jXnzCpm60QSNAADfp6s94oa4&s=UGLr4Z6sa2yWvKL99g7SuQdgwxnoZwhVmDuIbYsLqYY&e=" target="_blank">https://urldefense.proofpoint.com/v2/url?u=http-3A__files.gpfsug.org_presentations_2017_Manchester_08-5FResearch-5FTopics.pdf&d=DwMFaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=Ew59QH6nxuyx6oTs7a8AYX7kKG3gaWUGDGo5ZZr3wQ4&m=KLv9eH4GG8WlXC5ENj_jXnzCpm60QSNAADfp6s94oa4&s=UGLr4Z6sa2yWvKL99g7SuQdgwxnoZwhVmDuIbYsLqYY&e=</a></span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>   </span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>         <<a href="https://urldefense.proofpoint.com/v2/url?u=http-3A__files.gpfsug.org_presentations_2017_Manchester_08-5FResearch-5FTopics.pdf&d=DwMFaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=Ew59QH6nxuyx6oTs7a8AYX7kKG3gaWUGDGo5ZZr3wQ4&m=KLv9eH4GG8WlXC5ENj_jXnzCpm60QSNAADfp6s94oa4&s=UGLr4Z6sa2yWvKL99g7SuQdgwxnoZwhVmDuIbYsLqYY&e=" target="_blank">https://urldefense.proofpoint.com/v2/url?u=http-3A__files.gpfsug.org_presentations_2017_Manchester_08-5FResearch-5FTopics.pdf&d=DwMFaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=Ew59QH6nxuyx6oTs7a8AYX7kKG3gaWUGDGo5ZZr3wQ4&m=KLv9eH4GG8WlXC5ENj_jXnzCpm60QSNAADfp6s94oa4&s=UGLr4Z6sa2yWvKL99g7SuQdgwxnoZwhVmDuIbYsLqYY&e=</a>>></span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     ></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >   </span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>   </span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>          _<a href="https://urldefense.proofpoint.com/v2/url?u=http-3A__files.gpfsug.org_presentations_2017_Ehningen_31-5F-2D-5FSSUG17DE-5F-2D-5F&d=DwIGaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=HlQDuUjgJx4p54QzcXd0_zTwf4Cr2t3NINalNhLTA2E&m=ROiUtPAdbQ6DF9wWYS4MIUax_Xetm1p9qXbKzt6ZVf4&s=61HBHh68SJXjnUv1Lyqjzmg_Vl24EG5cZ-0Z3WgLX3A&e=" target="_blank">https://urldefense.proofpoint.com/v2/url?u=http-3A__files.gpfsug.org_presentations_2017_Ehningen_31-5F-2D-5FSSUG17DE-5F-2D-5F&d=DwIGaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=HlQDuUjgJx4p54QzcXd0_zTwf4Cr2t3NINalNhLTA2E&m=ROiUtPAdbQ6DF9wWYS4MIUax_Xetm1p9qXbKzt6ZVf4&s=61HBHh68SJXjnUv1Lyqjzmg_Vl24EG5cZ-0Z3WgLX3A&e=</a></span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>   </span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>         <<a href="https://urldefense.proofpoint.com/v2/url?u=http-3A__files.gpfsug.org_presentations_2017_Ehningen_31-5F-2D-5FSSUG17DE-5F-2D-5F&d=DwIGaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=HlQDuUjgJx4p54QzcXd0_zTwf4Cr2t3NINalNhLTA2E&m=ROiUtPAdbQ6DF9wWYS4MIUax_Xetm1p9qXbKzt6ZVf4&s=61HBHh68SJXjnUv1Lyqjzmg_Vl24EG5cZ-0Z3WgLX3A&e=" 
target="_blank">https://urldefense.proofpoint.com/v2/url?u=http-3A__files.gpfsug.org_presentations_2017_Ehningen_31-5F-2D-5FSSUG17DE-5F-2D-5F&d=DwIGaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=HlQDuUjgJx4p54QzcXd0_zTwf4Cr2t3NINalNhLTA2E&m=ROiUtPAdbQ6DF9wWYS4MIUax_Xetm1p9qXbKzt6ZVf4&s=61HBHh68SJXjnUv1Lyqjzmg_Vl24EG5cZ-0Z3WgLX3A&e=</a>></span><br></blockquote></blockquote></div></blockquote></div><div dir="auto"><blockquote type="cite"><div><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>   </span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>         <<a href="https://urldefense.proofpoint.com/v2/url?u=http-3A__files.gpfsug.org_presentations_2017_Ehningen_31-5F-2D-5FSSUG17DE-5F-2D&d=DwMFaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=Ew59QH6nxuyx6oTs7a8AYX7kKG3gaWUGDGo5ZZr3wQ4&m=KLv9eH4GG8WlXC5ENj_jXnzCpm60QSNAADfp6s94oa4&s=Il2rMx4AtGwjVRzX89kobZ0W25vW8TGm0KJevLd7KQ8&e=" target="_blank">https://urldefense.proofpoint.com/v2/url?u=http-3A__files.gpfsug.org_presentations_2017_Ehningen_31-5F-2D-5FSSUG17DE-5F-2D&d=DwMFaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=Ew59QH6nxuyx6oTs7a8AYX7kKG3gaWUGDGo5ZZr3wQ4&m=KLv9eH4GG8WlXC5ENj_jXnzCpm60QSNAADfp6s94oa4&s=Il2rMx4AtGwjVRzX89kobZ0W25vW8TGm0KJevLd7KQ8&e=</a></span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>   </span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>         <<a href="https://urldefense.proofpoint.com/v2/url?u=http-3A__files.gpfsug.org_presentations_2017_Ehningen_31-5F-2D-5FSSUG17DE-5F-2D&d=DwMFaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=Ew59QH6nxuyx6oTs7a8AYX7kKG3gaWUGDGo5ZZr3wQ4&m=KLv9eH4GG8WlXC5ENj_jXnzCpm60QSNAADfp6s94oa4&s=Il2rMx4AtGwjVRzX89kobZ0W25vW8TGm0KJevLd7KQ8&e=" target="_blank">https://urldefense.proofpoint.com/v2/url?u=http-3A__files.gpfsug.org_presentations_2017_Ehningen_31-5F-2D-5FSSUG17DE-5F-2D&d=DwMFaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=Ew59QH6nxuyx6oTs7a8AYX7kKG3gaWUGDGo5ZZr3wQ4&m=KLv9eH4GG8WlXC5ENj_jXnzCpm60QSNAADfp6s94oa4&s=Il2rMx4AtGwjVRzX89kobZ0W25vW8TGm0KJevLd7KQ8&e=</a>>></span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     > _Sven_Oehme_-_News_from_Research.pdf</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     ></span><br></blockquote></blockquote></blockquote></div></blockquote></div><div dir="auto"><blockquote type="cite"><div><blockquote type="cite"><blockquote type="cite"><span>        _<a href="https://urldefense.proofpoint.com/v2/url?u=http-3A__files.gpfsug.org_presentations_2016_SC16_12-5F-2D-5F&d=DwIGaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=HlQDuUjgJx4p54QzcXd0_zTwf4Cr2t3NINalNhLTA2E&m=ROiUtPAdbQ6DF9wWYS4MIUax_Xetm1p9qXbKzt6ZVf4&s=fDAdLyWu9yx3_uj0z_N3IQ98yjXF7q5hDrg7ZYZYtRE&e=" target="_blank">https://urldefense.proofpoint.com/v2/url?u=http-3A__files.gpfsug.org_presentations_2016_SC16_12-5F-2D-5F&d=DwIGaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=HlQDuUjgJx4p54QzcXd0_zTwf4Cr2t3NINalNhLTA2E&m=ROiUtPAdbQ6DF9wWYS4MIUax_Xetm1p9qXbKzt6ZVf4&s=fDAdLyWu9yx3_uj0z_N3IQ98yjXF7q5hDrg7ZYZYtRE&e=</a></span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     <<a 
href="https://urldefense.proofpoint.com/v2/url?u=http-3A__files.gpfsug.org_presentations_2016_SC16_12-5F-2D-5F&d=DwIGaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=HlQDuUjgJx4p54QzcXd0_zTwf4Cr2t3NINalNhLTA2E&m=ROiUtPAdbQ6DF9wWYS4MIUax_Xetm1p9qXbKzt6ZVf4&s=fDAdLyWu9yx3_uj0z_N3IQ98yjXF7q5hDrg7ZYZYtRE&e=" target="_blank">https://urldefense.proofpoint.com/v2/url?u=http-3A__files.gpfsug.org_presentations_2016_SC16_12-5F-2D-5F&d=DwIGaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=HlQDuUjgJx4p54QzcXd0_zTwf4Cr2t3NINalNhLTA2E&m=ROiUtPAdbQ6DF9wWYS4MIUax_Xetm1p9qXbKzt6ZVf4&s=fDAdLyWu9yx3_uj0z_N3IQ98yjXF7q5hDrg7ZYZYtRE&e=</a>></span><br></blockquote></blockquote></blockquote></div></blockquote></div><div dir="auto"><blockquote type="cite"><div><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >   </span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>   </span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>          <<a href="https://urldefense.proofpoint.com/v2/url?u=http-3A__files.gpfsug.org_presentations_2016_SC16_12-5F-2D&d=DwMFaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=Ew59QH6nxuyx6oTs7a8AYX7kKG3gaWUGDGo5ZZr3wQ4&m=KLv9eH4GG8WlXC5ENj_jXnzCpm60QSNAADfp6s94oa4&s=u_qcvB--uvtByHp9H471EowagobMpPLXYT_FFzMkQiw&e=" target="_blank">https://urldefense.proofpoint.com/v2/url?u=http-3A__files.gpfsug.org_presentations_2016_SC16_12-5F-2D&d=DwMFaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=Ew59QH6nxuyx6oTs7a8AYX7kKG3gaWUGDGo5ZZr3wQ4&m=KLv9eH4GG8WlXC5ENj_jXnzCpm60QSNAADfp6s94oa4&s=u_qcvB--uvtByHp9H471EowagobMpPLXYT_FFzMkQiw&e=</a></span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>   </span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>         <<a href="https://urldefense.proofpoint.com/v2/url?u=http-3A__files.gpfsug.org_presentations_2016_SC16_12-5F-2D&d=DwMFaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=Ew59QH6nxuyx6oTs7a8AYX7kKG3gaWUGDGo5ZZr3wQ4&m=KLv9eH4GG8WlXC5ENj_jXnzCpm60QSNAADfp6s94oa4&s=u_qcvB--uvtByHp9H471EowagobMpPLXYT_FFzMkQiw&e=" target="_blank">https://urldefense.proofpoint.com/v2/url?u=http-3A__files.gpfsug.org_presentations_2016_SC16_12-5F-2D&d=DwMFaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=Ew59QH6nxuyx6oTs7a8AYX7kKG3gaWUGDGo5ZZr3wQ4&m=KLv9eH4GG8WlXC5ENj_jXnzCpm60QSNAADfp6s94oa4&s=u_qcvB--uvtByHp9H471EowagobMpPLXYT_FFzMkQiw&e=</a>>></span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     ></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>        _Sven_Oehme_Dean_Hildebrand_-_News_from_IBM_Research.pdf</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     ></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     ></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     > from those presentations regarding 32 subblocks:</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     ></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     > "It has a significant performance penalty 
for</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>        small files in large</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     > block size filesystems"</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     ></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     > although I'm not clear on the specific</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>        definition of "large". Many</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     > filesystems I encounter only have a 1M block</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>        size so it may not</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     > matter there, although that same presentation</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>        clearly shows the</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     > benefit of larger block sizes which is yet</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>        *another* thing for which</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     > a migration tool would be helpful.</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     ></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     > -Aaron</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     ></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     > On Wed, Nov 29, 2017 at 2:08 PM, Nikhil Khandelwal</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     <_nikhilk@us.ibm.com_ <<a href="mailto:nikhilk@us.ibm.com" target="_blank">mailto:nikhilk@us.ibm.com</a></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>        <<a href="mailto:nikhilk@us.ibm.com" target="_blank">mailto:nikhilk@us.ibm.com</a>></span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     <<a href="mailto:nikhilk@us.ibm.com" target="_blank">mailto:nikhilk@us.ibm.com</a></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>        <<a href="mailto:nikhilk@us.ibm.com" target="_blank">mailto:nikhilk@us.ibm.com</a>>>>> wrote:</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     ></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     > 
Hi,</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     ></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     > I would like to clarify migration path to 5.0.0</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>        from 4.X.X</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     clusters.</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     > For all Spectrum Scale clusters that are</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>        currently at 4.X.X,</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     it is</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     > possible to migrate to 5.0.0 with no offline</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>        data migration</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     and no</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     > need to move data. Once these clusters are at</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>        5.0.0, they will</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     > benefit from the performance improvements, new</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>        features (such as</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     > file audit logging), and various enhancements</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>        that are</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     included in</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     5.0.0.</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     ></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     > That being said, there is one enhancement that</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>        will not be</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     applied</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     > to these clusters, and that is the increased</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>        number 
of</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     sub-blocks</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     > per block for small file allocation. This means</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>        that for file</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     > systems with a large block size and a lot of</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>        small files, the</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     > overall space utilization will be the same it</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>        currently is</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     in 4.X.X.</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     > Since file systems created at 4.X.X and earlier</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>        used a block</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     size</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     > that kept this allocation in mind, there should</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>        be very little</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     > impact on existing file systems.</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     ></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     > Outside of that one particular function, the</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>        remainder of the</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     > performance improvements, metadata improvements,</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>        updated</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     > compatibility, new functionality, and all of the</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>        other</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     enhancements</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     > will be immediately available to you once you</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>        
complete the</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     upgrade</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     > to 5.0.0 -- with no need to reformat, move data,</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>        or take</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     your data</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     offline.</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     ></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     > I hope that clarifies things a little and makes</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>        the upgrade path</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     > more accessible.</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     ></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     > Please let me know if there are any other</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>        questions or concerns.</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     ></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     > Thank you,</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     > Nikhil Khandelwal</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     > Spectrum Scale Development</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     > Client Adoption</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     ></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     > _______________________________________________</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     > gpfsug-discuss mailing list</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     > gpfsug-discuss at _spectrumscale.org_</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >   </span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>   </span><br></blockquote></blockquote></blockquote><blockquote 
type="cite"><blockquote type="cite"><span>          <<a href="https://urldefense.proofpoint.com/v2/url?u=http-3A__spectrumscale.org&d=DwMFaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=Ew59QH6nxuyx6oTs7a8AYX7kKG3gaWUGDGo5ZZr3wQ4&m=KLv9eH4GG8WlXC5ENj_jXnzCpm60QSNAADfp6s94oa4&s=Q-P8kRqnjsWB7ePz6YtA3U0xguo7-lVWKmb_zyZPndE&e=" target="_blank">https://urldefense.proofpoint.com/v2/url?u=http-3A__spectrumscale.org&d=DwMFaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=Ew59QH6nxuyx6oTs7a8AYX7kKG3gaWUGDGo5ZZr3wQ4&m=KLv9eH4GG8WlXC5ENj_jXnzCpm60QSNAADfp6s94oa4&s=Q-P8kRqnjsWB7ePz6YtA3U0xguo7-lVWKmb_zyZPndE&e=</a></span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>   </span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>         <<a href="https://urldefense.proofpoint.com/v2/url?u=http-3A__spectrumscale.org&d=DwMFaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=Ew59QH6nxuyx6oTs7a8AYX7kKG3gaWUGDGo5ZZr3wQ4&m=KLv9eH4GG8WlXC5ENj_jXnzCpm60QSNAADfp6s94oa4&s=Q-P8kRqnjsWB7ePz6YtA3U0xguo7-lVWKmb_zyZPndE&e=" target="_blank">https://urldefense.proofpoint.com/v2/url?u=http-3A__spectrumscale.org&d=DwMFaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=Ew59QH6nxuyx6oTs7a8AYX7kKG3gaWUGDGo5ZZr3wQ4&m=KLv9eH4GG8WlXC5ENj_jXnzCpm60QSNAADfp6s94oa4&s=Q-P8kRqnjsWB7ePz6YtA3U0xguo7-lVWKmb_zyZPndE&e=</a>>></span><br></blockquote></blockquote></div></blockquote></div><div dir="auto"><blockquote type="cite"><div><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     > _<a href="https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss-5F&d=DwIGaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=HlQDuUjgJx4p54QzcXd0_zTwf4Cr2t3NINalNhLTA2E&m=ROiUtPAdbQ6DF9wWYS4MIUax_Xetm1p9qXbKzt6ZVf4&s=uD-N75Y8hXNsZ7FmnqLA4D6P8WsMrRGMIM9-Oy2vIgE&e=" target="_blank">https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss-5F&d=DwIGaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=HlQDuUjgJx4p54QzcXd0_zTwf4Cr2t3NINalNhLTA2E&m=ROiUtPAdbQ6DF9wWYS4MIUax_Xetm1p9qXbKzt6ZVf4&s=uD-N75Y8hXNsZ7FmnqLA4D6P8WsMrRGMIM9-Oy2vIgE&e=</a></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     <<a href="https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss-5F&d=DwIGaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=HlQDuUjgJx4p54QzcXd0_zTwf4Cr2t3NINalNhLTA2E&m=ROiUtPAdbQ6DF9wWYS4MIUax_Xetm1p9qXbKzt6ZVf4&s=uD-N75Y8hXNsZ7FmnqLA4D6P8WsMrRGMIM9-Oy2vIgE&e=" target="_blank">https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss-5F&d=DwIGaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=HlQDuUjgJx4p54QzcXd0_zTwf4Cr2t3NINalNhLTA2E&m=ROiUtPAdbQ6DF9wWYS4MIUax_Xetm1p9qXbKzt6ZVf4&s=uD-N75Y8hXNsZ7FmnqLA4D6P8WsMrRGMIM9-Oy2vIgE&e=</a>></span><br></blockquote></blockquote></blockquote></div></blockquote></div><div dir="auto"><blockquote type="cite"><div><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >   </span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>   </span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>          <<a href="https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwMFaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=Ew59QH6nxuyx6oTs7a8AYX7kKG3gaWUGDGo5ZZr3wQ4&m=KLv9eH4GG8WlXC5ENj_jXnzCpm60QSNAADfp6s94oa4&s=WolSBY_TPJVJVPj5WEZ6JAbDZQK3j7oqn8u_Y5xORkE&e=" 
target="_blank">https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwMFaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=Ew59QH6nxuyx6oTs7a8AYX7kKG3gaWUGDGo5ZZr3wQ4&m=KLv9eH4GG8WlXC5ENj_jXnzCpm60QSNAADfp6s94oa4&s=WolSBY_TPJVJVPj5WEZ6JAbDZQK3j7oqn8u_Y5xORkE&e=</a></span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>   </span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>         <<a href="https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwMFaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=Ew59QH6nxuyx6oTs7a8AYX7kKG3gaWUGDGo5ZZr3wQ4&m=KLv9eH4GG8WlXC5ENj_jXnzCpm60QSNAADfp6s94oa4&s=WolSBY_TPJVJVPj5WEZ6JAbDZQK3j7oqn8u_Y5xORkE&e=" target="_blank">https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwMFaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=Ew59QH6nxuyx6oTs7a8AYX7kKG3gaWUGDGo5ZZr3wQ4&m=KLv9eH4GG8WlXC5ENj_jXnzCpm60QSNAADfp6s94oa4&s=WolSBY_TPJVJVPj5WEZ6JAbDZQK3j7oqn8u_Y5xORkE&e=</a>>></span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     ></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     > _______________________________________________</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     > gpfsug-discuss mailing list</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >     > gpfsug-discuss at _spectrumscale.org_</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>     >   </span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>   </span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>          <<a href="https://urldefense.proofpoint.com/v2/url?u=http-3A__spectrumscale.org&d=DwMFaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=Ew59QH6nxuyx6oTs7a8AYX7kKG3gaWUGDGo5ZZr3wQ4&m=KLv9eH4GG8WlXC5ENj_jXnzCpm60QSNAADfp6s94oa4&s=Q-P8kRqnjsWB7ePz6YtA3U0xguo7-lVWKmb_zyZPndE&e=" target="_blank">https://urldefense.proofpoint.com/v2/url?u=http-3A__spectrumscale.org&d=DwMFaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=Ew59QH6nxuyx6oTs7a8AYX7kKG3gaWUGDGo5ZZr3wQ4&m=KLv9eH4GG8WlXC5ENj_jXnzCpm60QSNAADfp6s94oa4&s=Q-P8kRqnjsWB7ePz6YtA3U0xguo7-lVWKmb_zyZPndE&e=</a></span><br></blockquote></blockquote></div></blockquote></div></blockquote></div></div>