<html>
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
  </head>
  <body text="#000000" bgcolor="#FFFFFF">
    <p>Hi, it's for a traditional NSD setup.</p>
    <p>--Joey<br>
    </p>
    <br>
    <div class="moz-cite-prefix">On 6/26/18 12:21 AM, Sven Oehme wrote:<br>
    </div>
    <blockquote type="cite"
cite="mid:CALssuR0NG9mae3ZZpzXWQYBJ5hnK6x-WG+P1h9Qz650z556yyg@mail.gmail.com">
      <div dir="ltr">Joseph,
        <div><br>
        </div>
        <div>The subblock size will be derived from the smallest block
          size in the filesystem. Given that you specified a metadata
          block size of 512K, that's what will be used to calculate the
          number of subblocks, even though your data pool is 8MB.</div>
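        <div><br>
        </div>
        <div>To put numbers on that derivation (using the values from
          the mmcrfs and mmlsfs output quoted below):</div>
        <pre wrap="">smallest block size in the filesystem = 512 KiB  (metadata)
subblock size for a 512 KiB block     = 8 KiB    (mmlsfs -f: 8192)
subblocks-per-full-block              = 512 KiB / 8 KiB = 64
data pool subblock size               = 8 MiB / 64 = 128 KiB  (mmlsfs: 131072)</pre>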
        <div>Is this setup for a traditional NSD setup or for GNR? The
          recommendations would be different. </div>
        <div><br>
        </div>
        <div>sven<br>
          <br>
          <div class="gmail_quote">
            <div dir="ltr">On Tue, Jun 26, 2018 at 2:59 AM Joseph
              Mendoza &lt;<a href="mailto:jam@ucar.edu">jam@ucar.edu</a>&gt; wrote:<br>
            </div>
            <blockquote class="gmail_quote" style="margin:0 0 0
              .8ex;border-left:1px #ccc solid;padding-left:1ex">Quick
              question, anyone know why GPFS wouldn't respect the
              default for<br>
              the subblocks-per-full-block parameter when creating a new
              filesystem? <br>
              I'd expect it to be set to 512 for an 8MB block size but
              my guess is<br>
              that also specifying a metadata-block-size is interfering
              with it (by<br>
              being too small).  This was a parameter recommended by the
              vendor for a<br>
              4.2 installation with metadata on dedicated SSDs in the
              system pool, any<br>
              best practices for 5.0?  I'm guessing I'd have to bump it
              up to at least<br>
              4MB to get 512 subblocks for both pools.<br>
              <br>
              fs1 created with:<br>
              <pre wrap=""># mmcrfs fs1 -F fs1_ALL -A no -B 8M -i 4096 -m 2 -M 2 -r 1 -R 2 \
    -j cluster -n 9000 --metadata-block-size 512K --perfileset-quota \
    --filesetdf -S relatime -Q yes --inode-limit 20000000:10000000 -T /gpfs/fs1</pre>
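              <br>
              For reference, "bumping it up" would mean something like
              this untested variant, identical except for the metadata
              block size:<br>
              <pre wrap=""># mmcrfs fs1 -F fs1_ALL -A no -B 8M -i 4096 -m 2 -M 2 -r 1 -R 2 \
    -j cluster -n 9000 --metadata-block-size 4M --perfileset-quota \
    --filesetdf -S relatime -Q yes --inode-limit 20000000:10000000 -T /gpfs/fs1</pre>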
              <br>
              # mmlsfs fs1<br>
              <pre wrap="">&lt;snipped&gt;

flag                value                    description
------------------- ------------------------ -----------------------------------
 -f                 8192                     Minimum fragment (subblock) size in bytes (system pool)
                    131072                   Minimum fragment (subblock) size in bytes (other pools)
 -i                 4096                     Inode size in bytes
 -I                 32768                    Indirect block size in bytes

 -B                 524288                   Block size (system pool)
                    8388608                  Block size (other pools)

 -V                 19.01 (5.0.1.0)          File system version

 --subblocks-per-full-block 64               Number of subblocks per full block
 -P                 system;DATA              Disk storage pools in file system</pre>
              <br>
              Thanks!<br>
              --Joey Mendoza<br>
              NCAR<br>
            </blockquote>
          </div>
        </div>
      </div>
      <!--'"--><br>
      <fieldset class="mimeAttachmentHeader"></fieldset>
      <br>
      <pre wrap="">_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
<a class="moz-txt-link-freetext" href="http://gpfsug.org/mailman/listinfo/gpfsug-discuss">http://gpfsug.org/mailman/listinfo/gpfsug-discuss</a>
</pre>
    </blockquote>
    <br>
  </body>
</html>