<html>
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
  </head>
  <body>
    <p>Hi, Kumaran, <br>
    </p>
    <p>that would explain the smaller IOs before the reboot, but not the
      larger-than-4MiB IOs afterwards on that machine.<br>
    </p>
    <p>Also, I have already seen that the numaMemoryInterleave setting
      seems to have no effect on that very installation; I just have not
      yet requested a PMR for it. I did check memory usage, of course,
      and saw that, regardless of this setting, one socket's memory is
      always almost completely consumed while the other one's stays
      rather empty - that looks like a bug to me, but it needs further
      investigation.</p>
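    <p>For illustration, a minimal sketch of the kind of check meant
      here (exact output depends on the numactl version):</p>
    <pre># free memory per socket, and per-node usage of the mmfsd process
numactl --hardware | grep -E 'node [01] (size|free)'
numastat -p $(pgrep -o mmfsd)</pre>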
    <p>Uwe<br>
    </p>
    <div class="moz-cite-prefix"><br>
    </div>
    <div class="moz-cite-prefix">On 24.02.22 15:32, Kumaran Rajaram
      wrote:<br>
    </div>
    <blockquote type="cite"
cite="mid:SJ0PR18MB401111646BFC0D2433DA2D4DBB3D9@SJ0PR18MB4011.namprd18.prod.outlook.com">
      <div class="WordSection1">
        <p class="MsoNormal">Hi Uwe,<o:p></o:p></p>
        <p class="MsoNormal"><o:p> </o:p></p>
        <p class="MsoNormal">>> But what puzzles me even more: one
          of the servers compiles IOs even smaller, varying between
          3.2MiB and 3.6MiB mostly - both for reads and writes ... I
          just cannot see why.<o:p></o:p></p>
        <p class="MsoNormal"><o:p> </o:p></p>
        <p class="MsoNormal">IMHO, If GPFS on this particular NSD server
          was restarted often during the setup, then it is possible that
          the GPFS pagepool may not be contiguous. As a result, GPFS
          8MiB buffer in the pagepool might be a scatter-gather (SG)
          list with many small entries (in the memory) resulting in
          smaller I/O when these buffers are issued to the disks. The
          fix would be to reboot the server and start GPFS so that
          pagepool is contiguous resulting in 8MiB buffer to be
          comprised of 1 (or fewer) SG entries.
          <o:p></o:p></p>
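        <p class="MsoNormal">As an illustration of the mechanism, a
          minimal sketch: the block layer caps the number of SG segments
          per request, so a fragmented buffer can hit that cap well
          before max_sectors_kb and the request gets split, e.g.:</p>
        <pre># per-device limits that determine how a fragmented buffer is split (sketch)
for q in /sys/block/sd*/queue; do
  echo "$q: max_segments=$(cat $q/max_segments)" \
       "max_segment_size=$(cat $q/max_segment_size)" \
       "max_sectors_kb=$(cat $q/max_sectors_kb)"
done</pre>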
        <p class="MsoNormal">>> In the current situation (i.e.
          with IOs a bit larger than 4MiB) setting max_sectors_kb to
          4096 might do the trick, but as I do not know the cause for
          that behaviour it might well start to issue IOs smaller than
          4MiB again at some point, so that is not a nice solution.<o:p></o:p></p>
        <p class="MsoNormal">It will be advised not to restart GPFS
          often in the NSD servers (in production) to keep the pagepool
          contiguous. Ensure that there is enough free memory in NSD
          server and not run any memory intensive jobs so that pagepool
          is not impacted (e.g. swapped out).    <o:p></o:p></p>
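        <p class="MsoNormal">A minimal sketch of such a check (expected
          values depend on the system):</p>
        <pre># memory headroom on the NSD server, and whether mmfsd has been swapped
free -g
grep VmSwap /proc/$(pgrep -o mmfsd)/status   # should ideally stay at 0 kB</pre>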
        <p class="MsoNormal"><o:p> </o:p></p>
        <p class="MsoNormal">Also, enable GPFS numaMemoryInterleave=yes
          and verify that pagepool is equally distributed across the
          NUMA domains for good performance. GPFS
          numaMemoryInterleave=yes requires that numactl packages are
          installed and then GPFS restarted.<o:p></o:p></p>
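        <p class="MsoNormal">A minimal sketch of that sequence on an
          affected node (assuming an RPM-based system, as described
          earlier in the thread):</p>
        <pre># prerequisite, setting, and restart of GPFS on this node
rpm -q numactl || yum install -y numactl
mmchconfig numaMemoryInterleave=yes
mmshutdown     # stop GPFS on this node
mmstartup      # start it again so the setting takes effect</pre>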
        <p class="MsoNormal"><span
            style="font-size:9.0pt;font-family:"Lucida
            Console""><o:p> </o:p></span></p>
        <p class="MsoNormal" style="text-autospace:none"><span
            style="font-size:9.0pt;font-family:"Lucida
            Console""># mmfsadm dump config | egrep
            "numaMemory|pagepool "<o:p></o:p></span></p>
        <p class="MsoNormal" style="text-autospace:none"><span
            style="font-size:9.0pt;font-family:"Lucida
            Console"">! numaMemoryInterleave yes<o:p></o:p></span></p>
        <p class="MsoNormal"><span
            style="font-size:9.0pt;font-family:"Lucida
            Console"">! pagepool 282394099712<o:p></o:p></span></p>
        <p class="MsoNormal"><span
            style="font-size:9.0pt;font-family:"Lucida
            Console""><o:p> </o:p></span></p>
        <p class="MsoNormal" style="text-autospace:none"><span
            style="font-size:9.0pt;font-family:"Lucida
            Console""># pgrep mmfsd | xargs numastat -p<o:p></o:p></span></p>
        <p class="MsoNormal" style="text-autospace:none"><span
            style="font-size:9.0pt;font-family:"Lucida
            Console""><o:p> </o:p></span></p>
        <p class="MsoNormal" style="text-autospace:none"><span
            style="font-size:9.0pt;font-family:"Lucida
            Console"">Per-node process memory usage (in MBs) for
            PID 2120821 (mmfsd)<o:p></o:p></span></p>
        <p class="MsoNormal" style="text-autospace:none"><span
            style="font-size:9.0pt;font-family:"Lucida
            Console"">                           Node 0         
            Node 1           Total<o:p></o:p></span></p>
        <p class="MsoNormal" style="text-autospace:none"><span
            style="font-size:9.0pt;font-family:"Lucida
            Console"">                  ---------------
            --------------- ---------------<o:p></o:p></span></p>
        <p class="MsoNormal" style="text-autospace:none"><span
            style="font-size:9.0pt;font-family:"Lucida
            Console"">Huge                         0.00           
            0.00            0.00<o:p></o:p></span></p>
        <p class="MsoNormal" style="text-autospace:none"><span
            style="font-size:9.0pt;font-family:"Lucida
            Console"">Heap                         1.26           
            3.26            4.52<o:p></o:p></span></p>
        <p class="MsoNormal" style="text-autospace:none"><span
            style="font-size:9.0pt;font-family:"Lucida
            Console"">Stack                        0.01           
            0.01            0.02<o:p></o:p></span></p>
        <p class="MsoNormal" style="text-autospace:none"><span
            style="font-size:9.0pt;font-family:"Lucida
            Console"">Private                 137710.43      
            137709.96       275420.39<o:p></o:p></span></p>
        <p class="MsoNormal" style="text-autospace:none"><span
            style="font-size:9.0pt;font-family:"Lucida
            Console"">----------------  ---------------
            --------------- ---------------<o:p></o:p></span></p>
        <p class="MsoNormal"><span
            style="font-size:9.0pt;font-family:"Lucida
            Console"">Total                   137711.70      
            137713.23       275424.92</span><o:p></o:p></p>
        <p class="MsoNormal"><o:p> </o:p></p>
        <p class="MsoNormal">My two cents,<o:p></o:p></p>
        <p class="MsoNormal">-Kums<o:p></o:p></p>
        <p class="MsoNormal"><o:p> </o:p></p>
        <div>
          <p class="MsoNormal">Kumaran Rajaram<o:p></o:p></p>
          <p class="MsoNormal"><img style="width:1.4062in;height:.5in"
              id="Picture_x0020_1"
              src="cid:part1.qVCz0llA.x6R0wOzv@kit.edu" class=""
              width="135" height="48"><o:p></o:p></p>
        </div>
        <p class="MsoNormal"><o:p> </o:p></p>
        <div>
          <div style="border:none;border-top:solid #E1E1E1
            1.0pt;padding:3.0pt 0in 0in 0in">
            <p class="MsoNormal"><b>From:</b>
              <a class="moz-txt-link-abbreviated" href="mailto:gpfsug-discuss-bounces@spectrumscale.org">gpfsug-discuss-bounces@spectrumscale.org</a>
              <a class="moz-txt-link-rfc2396E" href="mailto:gpfsug-discuss-bounces@spectrumscale.org"><gpfsug-discuss-bounces@spectrumscale.org></a>
              <b>On Behalf Of </b>Uwe Falke<br>
              <b>Sent:</b> Wednesday, February 23, 2022 8:04 PM<br>
              <b>To:</b> <a class="moz-txt-link-abbreviated" href="mailto:gpfsug-discuss@spectrumscale.org">gpfsug-discuss@spectrumscale.org</a><br>
              <b>Subject:</b> Re: [gpfsug-discuss] IO sizes<o:p></o:p></p>
          </div>
        </div>
        <p class="MsoNormal"><o:p> </o:p></p>
        <p>Hi, <o:p></o:p></p>
        <p>the test bench is gpfsperf running on up to 12 clients with
          1...64 threads doing sequential reads and writes; the file
          size per gpfsperf process is 12TB (with 6TB I saw caching
          effects, in particular for large thread numbers ...)
          <o:p></o:p></p>
        <p>As I wrote initially: GPFS is issuing nothing but 8MiB IOs to
          the data disks, as expected in that case.
          <o:p></o:p></p>
        <p>Interesting thing though: <o:p></o:p></p>
        <p>I have rebooted the suspicious node. Now, it does not issue
          smaller IOs than the others, but -- unbelievable -- larger
          ones (up to about 4.7MiB). This is still harmful, as that size
          is also incompatible with full-stripe writes on the storage
          (8+2 disk groups, i.e. logically RAID6).<o:p></o:p></p>
        <p>Currently, I draw this information from the storage boxes; I
          have not yet checked iostat data for that benchmark test after
          the reboot (before, when IO sizes were smaller, we saw that
          both in iostat and in the perf data retrieved from the storage
          controllers).<o:p></o:p></p>
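        <p>The check itself is straightforward, e.g. (a sketch; column
          names depend on the sysstat version):</p>
        <pre># watch average request sizes on the multipath devices during the benchmark
iostat -xm 5 | grep -E 'Device|dm-'   # on RHEL 7, avgrq-sz is in 512-byte sectors; 16384 = 8 MiB</pre>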
        <p><o:p> </o:p></p>
        <p>And: we have a separate data pool, hence dataOnly NSDs; I am
          just talking about these ...
          <o:p></o:p></p>
        <p><o:p> </o:p></p>
        <p>As for "Are you sure that Linux OS is configured the same on
          all 4 NSD servers?." - of course there are not two boxes
          identical in the world. I have actually not installed those
          machines, and, yes, i also considered reinstalling them (or at
          least the disturbing one).<o:p></o:p></p>
        <p>However, I have no reason to assume or expect a difference;
          the supplier has only recently implemented these systems from
          scratch.
          <o:p></o:p></p>
        <p><o:p> </o:p></p>
        <p>In the current situation (i.e. with IOs a bit larger than
          4MiB) setting max_sectors_kb to 4096 might do the trick, but
          as I do not know the cause of that behaviour it might well
          start to issue IOs smaller than 4MiB again at some point, so
          that is not a nice solution.<o:p></o:p></p>
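        <p>For reference, the change itself would be small - a sketch,
          with the device selection purely illustrative:</p>
        <pre># cap request size at 4 MiB; sdX stands for each data LUN (and its dm device)
echo 4096 > /sys/block/sdX/queue/max_sectors_kb
# persistent variant, e.g. in /etc/udev/rules.d/99-max-sectors.rules:
# ACTION=="add|change", SUBSYSTEM=="block", KERNEL=="sd*|dm-*", ATTR{queue/max_sectors_kb}="4096"</pre>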
        <p><o:p> </o:p></p>
        <p>Thanks<o:p></o:p></p>
        <p>Uwe<o:p></o:p></p>
        <p><o:p> </o:p></p>
        <div>
          <p class="MsoNormal">On 23.02.22 22:20, Andrew Beattie wrote:<o:p></o:p></p>
        </div>
        <blockquote style="margin-top:5.0pt;margin-bottom:5.0pt">
          <p class="MsoNormal">Alex, <o:p></o:p></p>
          <div>
            <p class="MsoNormal"><o:p> </o:p></p>
          </div>
          <div>
            <p class="MsoNormal">Metadata will be 4Kib <o:p></o:p></p>
          </div>
          <div>
            <p class="MsoNormal"><o:p> </o:p></p>
          </div>
          <div>
            <p class="MsoNormal">Depending on the filesystem version you
              will also have subblocks to consider V4 filesystems have
              1/32 subblocks, V5 filesystems have 1/1024 subblocks
              (assuming metadata and data block size is the same)<o:p></o:p></p>
          </div>
          <div>
            <p class="MsoNormal"><br>
              My first question would be is “ Are you sure that Linux OS
              is configured the same on all 4 NSD servers?.<o:p></o:p></p>
          </div>
          <div>
            <p class="MsoNormal"><o:p> </o:p></p>
          </div>
          <div>
            <p class="MsoNormal">My second question would be do you know
              what your average file size is if most of your files are
              smaller than your filesystem block size, then you are
              always going to be performing writes using groups of
              subblocks rather than a full block writes.<o:p></o:p></p>
          </div>
          <div>
            <p class="MsoNormal"><o:p> </o:p></p>
          </div>
          <div>
            <p class="MsoNormal">Regards, <o:p></o:p></p>
          </div>
          <div>
            <p class="MsoNormal"><o:p> </o:p></p>
          </div>
          <div>
            <p class="MsoNormal">Andrew<o:p></o:p></p>
            <div>
              <p class="MsoNormal"><o:p> </o:p></p>
            </div>
            <div>
              <p class="MsoNormal"><br>
                <br>
                <o:p></o:p></p>
              <blockquote style="margin-top:5.0pt;margin-bottom:5.0pt">
                <p class="MsoNormal" style="margin-bottom:12.0pt">On 24
                  Feb 2022, at 04:39, Alex Chekholko
                  <a href="mailto:alex@calicolabs.com"
                    moz-do-not-send="true"><alex@calicolabs.com></a>
                  wrote:<o:p></o:p></p>
              </blockquote>
            </div>
            <blockquote style="margin-top:5.0pt;margin-bottom:5.0pt">
              <div>
                <p class="MsoNormal"> <span class="pfptpreheader1"><span
                      style="font-size:1.0pt;color:white">Hi, Metadata
                      I/Os will always be smaller than the usual data
                      block size, right? Which version of GPFS? Regards,
                      Alex On Wed, Feb 23, 2022 at 10:26 AM Uwe Falke
                      <a href="mailto:uwe.falke@kit.edu"
                        moz-do-not-send="true"><uwe.falke@kit.edu></a>
                      wrote: Dear all, sorry for asking a question which
                      seems
                    </span></span><span
                    style="font-size:1.0pt;color:white">ZjQcmQRYFpfptBannerStart</span>
                  <o:p></o:p></p>
                <table class="MsoNormalTable"
                  style="width:100.0%;border-radius:4px" width="100%"
                  cellspacing="0" cellpadding="0" border="0">
                  <tbody>
                    <tr>
                      <td style="padding:2.25pt 0in 12.0pt 0in">
                        <table class="MsoNormalTable"
                          style="width:100.0%;background:#D0D8DC;border:none;border-top:solid
                          #90A4AE 3.0pt" width="100%" cellspacing="0"
                          cellpadding="0" border="1">
                          <tbody>
                            <tr>
                              <td style="border:none;padding:0in 7.5pt
                                3.75pt 4.5pt" valign="top">
                                <table class="MsoNormalTable"
                                  cellspacing="0" cellpadding="0"
                                  border="0" align="left">
                                  <tbody>
                                    <tr>
                                      <td style="padding:3.0pt 6.0pt
                                        3.0pt 6.0pt">
                                        <p class="MsoNormal"><span
                                            class="pfpttitlemso1"><span
                                              style="font-size:10.5pt">This
                                              Message Is From an
                                              External Sender
                                            </span></span><o:p></o:p></p>
                                      </td>
                                    </tr>
                                    <tr>
                                      <td style="padding:3.0pt 6.0pt
                                        3.0pt 6.0pt">
                                        <p class="MsoNormal"><span
                                            class="pfptsubtitlemso1"><span
style="font-size:9.0pt;color:black">This message came from outside your
                                              organization.
                                            </span></span><o:p></o:p></p>
                                      </td>
                                    </tr>
                                  </tbody>
                                </table>
                              </td>
                            </tr>
                          </tbody>
                        </table>
                      </td>
                    </tr>
                  </tbody>
                </table>
                <p class="MsoNormal"><span
                    style="font-size:1.0pt;color:white">ZjQcmQRYFpfptBannerEnd<br>
                    <br>
                  </span><o:p></o:p></p>
                <div>
                  <p class="MsoNormal">Hi, <o:p></o:p></p>
                  <div>
                    <p class="MsoNormal"><o:p> </o:p></p>
                  </div>
                  <div>
                    <p class="MsoNormal">Metadata I/Os will always be
                      smaller than the usual data block size, right?<o:p></o:p></p>
                  </div>
                  <div>
                    <p class="MsoNormal">Which version of GPFS?<o:p></o:p></p>
                  </div>
                  <div>
                    <p class="MsoNormal"><o:p> </o:p></p>
                  </div>
                  <div>
                    <p class="MsoNormal">Regards,<o:p></o:p></p>
                  </div>
                  <div>
                    <p class="MsoNormal">Alex<o:p></o:p></p>
                  </div>
                </div>
                <p class="MsoNormal"><o:p> </o:p></p>
                <div>
                  <div>
                    <p class="MsoNormal">On Wed, Feb 23, 2022 at 10:26
                      AM Uwe Falke <<a
                        href="mailto:uwe.falke@kit.edu"
                        moz-do-not-send="true"
                        class="moz-txt-link-freetext">uwe.falke@kit.edu</a>>
                      wrote:<o:p></o:p></p>
                  </div>
                  <blockquote style="border:none;border-left:solid
                    #CCCCCC 1.0pt;padding:0in 0in 0in
                    6.0pt;margin-left:4.8pt;margin-right:0in">
                    <p class="MsoNormal">Dear all,<br>
                      <br>
                      sorry for asking a question which seems not
                      directly GPFS related:<br>
                      <br>
                      In a setup with 4 NSD servers (old-style, with
                      storage controllers in <br>
                      the back end), 12 clients and 10 Seagate storage
                      systems, I do see in <br>
                      benchmark tests that  just one of the NSD servers
                      does send smaller IO <br>
                      requests to the storage  than the other 3 (that
                      is, both reads and <br>
                      writes are smaller).<br>
                      <br>
                      The NSD servers form 2 pairs, each pair is
                      connected to 5 seagate boxes <br>
                      ( one server to the controllers A, the other one
                      to controllers B of the <br>
                      Seagates, resp.).<br>
                      <br>
                      All 4 NSD servers are set up similarly:<br>
                      <br>
                      kernel: 3.10.0-1160.el7.x86_64 #1 SMP<br>
                      <br>
                      HBA: Broadcom / LSI Fusion-MPT 12GSAS/PCIe Secure
                      SAS38xx<br>
                      <br>
                      driver : mpt3sas 31.100.01.00<br>
                      <br>
                      max_sectors_kb=8192 (max_hw_sectors_kb=16383 , not
                      16384, as limited by <br>
                      mpt3sas) for all sd devices and all multipath (dm)
                      devices built on top.<br>
                      <br>
                      scheduler: deadline<br>
                      <br>
                      multipath (actually we do have 3 paths to each
                      volume, so there is some <br>
                      asymmetry, but that should not affect the IOs,
                      shouldn't it?, and if it <br>
                      did we would see the same effect in both pairs of
                      NSD servers, but we do <br>
                      not).<br>
                      <br>
                      All 4 storage systems are also configured the same
                      way (2 disk groups / <br>
                      pools / declustered arrays, one managed by  ctrl
                      A, one by ctrl B,  and <br>
                      8 volumes out of each; makes altogether 2 x 8 x 10
                      = 160 NSDs).<br>
                      <br>
                      <br>
                      GPFS BS is 8MiB , according to iohistory (mmdiag)
                      we do see clean IO <br>
                      requests of 16384 disk blocks (i.e. 8192kiB) from
                      GPFS.<br>
                      <br>
                      The first question I have - but that is not my
                      main one: I do see, both <br>
                      in iostat and on the storage systems, that the
                      default IO requests are <br>
                      about 4MiB, not 8MiB as I'd expect from above
                      settings (max_sectors_kb <br>
                      is really in terms of kiB, not sectors, cf. <br>
                      <a
                        href="https://www.kernel.org/doc/Documentation/block/queue-sysfs.txt"
                        target="_blank" moz-do-not-send="true">https://www.kernel.org/doc/Documentation/block/queue-sysfs.txt</a>).<br>
                      <br>
                      But what puzzles me even more: one of the servers
                      compiles IOs even <br>
                      smaller, varying between 3.2MiB and 3.6MiB mostly
                      - both for reads and <br>
                      writes ... I just cannot see why.<br>
                      <br>
                      I have to suspect that this will (in writing to
                      the storage) cause <br>
                      incomplete stripe writes on our erasure-coded
                      volumes (8+2p)(as long as <br>
                      the controller is not able to re-coalesce the data
                      properly; and it <br>
                      seems it cannot do it completely at least)<br>
                      <br>
                      <br>
                      If someone of you has seen that already and/or
                      knows a potential <br>
                      explanation I'd be glad to learn about.<br>
                      <br>
                      <br>
                      And if some of you wonder: yes, I (was) moved away
                      from IBM and am now <br>
                      at KIT.<br>
                      <br>
                      Many thanks in advance<br>
                      <br>
                      Uwe<br>
                      <br>
                      <br>
                      -- <br>
                      Karlsruhe Institute of Technology (KIT)<br>
                      Steinbuch Centre for Computing (SCC)<br>
                      Scientific Data Management (SDM)<br>
                      <br>
                      Uwe Falke<br>
                      <br>
                      Hermann-von-Helmholtz-Platz 1, Building 442, Room
                      187<br>
                      D-76344 Eggenstein-Leopoldshafen<br>
                      <br>
                      Tel: +49 721 608 28024<br>
                      Email: <a href="mailto:uwe.falke@kit.edu"
                        target="_blank" moz-do-not-send="true"
                        class="moz-txt-link-freetext">uwe.falke@kit.edu</a><br>
                      <a
                        href="http://www.scc.kit.edu/" target="_blank"
                        moz-do-not-send="true">www.scc.kit.edu</a><br>
                      <br>
                      Registered office:<br>
                      Kaiserstraße 12, 76131 Karlsruhe, Germany<br>
                      <br>
                      KIT – The Research University in the Helmholtz
                      Association<br>
                      <br>
                      _______________________________________________<br>
                      gpfsug-discuss mailing list<br>
                      gpfsug-discuss at <a
                        href="http://spectrumscale.org/" target="_blank"
                        moz-do-not-send="true">spectrumscale.org</a><br>
                      <a
                        href="http://gpfsug.org/mailman/listinfo/gpfsug-discuss"
                        target="_blank" moz-do-not-send="true">http://gpfsug.org/mailman/listinfo/gpfsug-discuss</a><o:p></o:p></p>
                  </blockquote>
                </div>
              </div>
            </blockquote>
          </div>
          <p class="MsoNormal"><br>
            <br>
            <br>
            <br>
            <o:p></o:p></p>
          <pre>_______________________________________________<o:p></o:p></pre>
          <pre>gpfsug-discuss mailing list<o:p></o:p></pre>
          <pre>gpfsug-discuss at spectrumscale.org<o:p></o:p></pre>
          <pre><a href="https://nam02.safelinks.protection.outlook.com/?url=http%3A%2F%2Fgpfsug.org%2Fmailman%2Flistinfo%2Fgpfsug-discuss&data=04%7C01%7Ckrajaram%40geocomputing.net%7C52cc6360e6ea4be737ba08d9f7317d78%7C229a2792a5064f25b3bdbab585cec3ed%7C0%7C0%7C637812615096678246%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C2000&sdata=G9l4PMuzdNA%2BwtAtWK%2BApoXxvKn5jZKeP%2FENOVc9xXg%3D&reserved=0" moz-do-not-send="true">http://gpfsug.org/mailman/listinfo/gpfsug-discuss</a><o:p></o:p></pre>
        </blockquote>
        <pre>-- <o:p></o:p></pre>
        <pre>Karlsruhe Institute of Technology (KIT)<o:p></o:p></pre>
        <pre>Steinbuch Centre for Computing (SCC)<o:p></o:p></pre>
        <pre>Scientific Data Management (SDM)<o:p></o:p></pre>
        <pre><o:p> </o:p></pre>
        <pre>Uwe Falke<o:p></o:p></pre>
        <pre><o:p> </o:p></pre>
        <pre>Hermann-von-Helmholtz-Platz 1, Building 442, Room 187<o:p></o:p></pre>
        <pre>D-76344 Eggenstein-Leopoldshafen<o:p></o:p></pre>
        <pre><o:p> </o:p></pre>
        <pre>Tel: +49 721 608 28024<o:p></o:p></pre>
        <pre>Email: <a href="mailto:uwe.falke@kit.edu" moz-do-not-send="true" class="moz-txt-link-freetext">uwe.falke@kit.edu</a><o:p></o:p></pre>
        <pre><a href="https://nam02.safelinks.protection.outlook.com/?url=http%3A%2F%2Fwww.scc.kit.edu%2F&data=04%7C01%7Ckrajaram%40geocomputing.net%7C52cc6360e6ea4be737ba08d9f7317d78%7C229a2792a5064f25b3bdbab585cec3ed%7C0%7C0%7C637812615096678246%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C2000&sdata=mXwzkLB1EFB1Dh31rVRMwJZBY4CBbHcJduc9gK6M71A%3D&reserved=0" moz-do-not-send="true">www.scc.kit.edu</a><o:p></o:p></pre>
        <pre><o:p> </o:p></pre>
        <pre>Registered office:<o:p></o:p></pre>
        <pre>Kaiserstraße 12, 76131 Karlsruhe, Germany<o:p></o:p></pre>
        <pre><o:p> </o:p></pre>
        <pre>KIT – The Research University in the Helmholtz Association <o:p></o:p></pre>
      </div>
      <br>
      <fieldset class="moz-mime-attachment-header"></fieldset>
      <pre class="moz-quote-pre" wrap="">_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
<a class="moz-txt-link-freetext" href="http://gpfsug.org/mailman/listinfo/gpfsug-discuss">http://gpfsug.org/mailman/listinfo/gpfsug-discuss</a>
</pre>
    </blockquote>
    <pre class="moz-signature" cols="72">-- 
Karlsruhe Institute of Technology (KIT)
Steinbuch Centre for Computing (SCC)
Scientific Data Management (SDM)

Uwe Falke

Hermann-von-Helmholtz-Platz 1, Building 442, Room 187
D-76344 Eggenstein-Leopoldshafen

Tel: +49 721 608 28024
Email: <a class="moz-txt-link-abbreviated" href="mailto:uwe.falke@kit.edu">uwe.falke@kit.edu</a>
<a class="moz-txt-link-abbreviated" href="http://www.scc.kit.edu">www.scc.kit.edu</a>

Registered office:
Kaiserstraße 12, 76131 Karlsruhe, Germany

KIT – The Research University in the Helmholtz Association 
</pre>
  </body>
</html>