<span style=" font-size:10pt;font-family:sans-serif">I guess that particular
table is not the whole truth, nor a specification, nor a promise, but a
simplified summary of what you get when there is just one block size that
applies to both meta-data and data-data.  </span><br><br><span style=" font-size:10pt;font-family:sans-serif">You have discovered
that it does not apply to systems where metadata has a different blocksize
than data-data.  </span><br><br><span style=" font-size:10pt;font-family:sans-serif">My guesstimate
(speculation!) is that the deployed code chooses one subblocks-per-full-block
parameter and applies it to both, which would explain the results we're
seeing.  Further, it seems that the mmlsfs command assumes, at least
in some places, that there is only one subblocks-per-block parameter...</span><br><span style=" font-size:10pt;font-family:sans-serif">Looking deeper
into the code is another story for another day -- but I'll say that there
seems to be sufficient flexibility that, if this were deemed a burning issue,
there could be further "enhancements..."  ;-)<br></span><br><br><br><br>
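<span style=" font-size:10pt;font-family:sans-serif">To spell out the arithmetic behind that guess (a back-of-the-envelope sketch of my speculation, not a reading of the actual code): sub-block size is simply block size divided by subblocks-per-full-block. A 1 MiB metadata block with 8 KiB sub-blocks implies 128 sub-blocks per block; apply that same 128 to the 4 MiB data pools and you land on 4 MiB / 128 = 32 KiB -- exactly the 32768 shown in the mmlsfs output below. In plain shell arithmetic:</span><br><br><tt><span style=" font-size:10pt">
# Sketch of the shared-ratio theory above -- nothing GPFS-specific here, just arithmetic.<br>
metadata_block=$((1024 * 1024))               # --metadata-block-size 1M<br>
data_block=$((4 * 1024 * 1024))               # -B 4M<br>
subs_per_block=$(( metadata_block / 8192 ))   # 1 MiB / 8 KiB = 128, matches --subblocks-per-full-block<br>
echo $(( data_block / subs_per_block ))       # prints 32768, i.e. the 32 KiB sub-block size Kevin is seeing<br>
</span></tt><br>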
<span style=" font-size:9pt;color:#5f5f5f;font-family:sans-serif">From:
       </span><span style=" font-size:9pt;font-family:sans-serif">"Buterbaugh,
Kevin L" <Kevin.Buterbaugh@Vanderbilt.Edu></span><br><span style=" font-size:9pt;color:#5f5f5f;font-family:sans-serif">To:
       </span><span style=" font-size:9pt;font-family:sans-serif">gpfsug
main discussion list <gpfsug-discuss@spectrumscale.org></span><br><span style=" font-size:9pt;color:#5f5f5f;font-family:sans-serif">Date:
       </span><span style=" font-size:9pt;font-family:sans-serif">08/01/2018
02:24 PM</span><br><span style=" font-size:9pt;color:#5f5f5f;font-family:sans-serif">Subject:
       </span><span style=" font-size:9pt;font-family:sans-serif">Re:
[gpfsug-discuss] Sub-block size wrong on GPFS 5 filesystem?</span><br><span style=" font-size:9pt;color:#5f5f5f;font-family:sans-serif">Sent
by:        </span><span style=" font-size:9pt;font-family:sans-serif">gpfsug-discuss-bounces@spectrumscale.org</span><br><hr noshade><br><br><br><span style=" font-size:12pt">Hi Marc, </span><br><br><span style=" font-size:12pt">Thanks for the response … I understand
what you're saying, but since I'm asking for a 1 MB block size for metadata
and a 4 MB block size for data, and according to the chart in the mmcrfs
man page both result in an 8 KB sub-block size, I'm still confused as to
why I've got a 32 KB sub-block size for my non-system (i.e. data) pools?
 Especially when you consider that 32 KB isn't the default even if
I had chosen an 8 or 16 MB block size!</span><br><br><span style=" font-size:12pt">Kevin</span><br><br><span style=" font-size:12pt">—</span><br><span style=" font-size:12pt">Kevin Buterbaugh - Senior System Administrator</span><br><span style=" font-size:12pt">Vanderbilt University - Advanced Computing
Center for Research and Education</span><br><a href="mailto:Kevin.Buterbaugh@vanderbilt.edu"><span style=" font-size:12pt;color:blue"><u>Kevin.Buterbaugh@vanderbilt.edu</u></span></a><span style=" font-size:12pt">- (615)875-9633</span><br><br><span style=" font-size:12pt">On Aug 1, 2018, at 12:21 PM, Marc A Kaplan
<</span><a href="mailto:makaplan@us.ibm.com"><span style=" font-size:12pt;color:blue"><u>makaplan@us.ibm.com</u></span></a><span style=" font-size:12pt">>
wrote:</span><br><br><span style=" font-size:10pt;font-family:sans-serif">I haven't looked
into all the details but here's a clue -- notice there is only one "subblocks-per-full-block"
parameter.  </span><span style=" font-size:12pt"><br></span><span style=" font-size:10pt;font-family:sans-serif"><br>And it is the same for both metadata blocks and datadata blocks.<br><br>So maybe (MAYBE) that is a constraint somewhere...</span><span style=" font-size:12pt"><br></span><span style=" font-size:10pt;font-family:sans-serif"><br>Certainly, in the currently supported code, that's what you get.</span><span style=" font-size:12pt"><br></span>
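<span style=" font-size:10pt;font-family:sans-serif">If you want to see that single parameter side by side with the per-pool values, I believe mmlsfs will report just the attributes you ask for -- I'm going from memory here, so treat the exact flags as an assumption and check the man page on your 5.0.1 cluster:</span><br><br><tt><span style=" font-size:10pt">
# Hypothetical query -- confirm these flags against the mmlsfs man page before relying on them.<br>
mmlsfs gpfs5 -f -B --subblocks-per-full-block<br>
# Expectation: two -f values (8192 / 32768) and two -B values (1048576 / 4194304),<br>
# but only a single --subblocks-per-full-block value (128) covering every pool.<br>
</span></tt><br>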
Kevin L" <</span><a href="mailto:Kevin.Buterbaugh@Vanderbilt.Edu"><span style=" font-size:9pt;color:blue;font-family:sans-serif"><u>Kevin.Buterbaugh@Vanderbilt.Edu</u></span></a><span style=" font-size:9pt;font-family:sans-serif">></span><span style=" font-size:9pt;color:#5f5f5f;font-family:sans-serif"><br>To:        </span><span style=" font-size:9pt;font-family:sans-serif">gpfsug
main discussion list <</span><a href="mailto:gpfsug-discuss@spectrumscale.org"><span style=" font-size:9pt;color:blue;font-family:sans-serif"><u>gpfsug-discuss@spectrumscale.org</u></span></a><span style=" font-size:9pt;font-family:sans-serif">></span><span style=" font-size:9pt;color:#5f5f5f;font-family:sans-serif"><br>Date:        </span><span style=" font-size:9pt;font-family:sans-serif">08/01/2018
12:55 PM</span><span style=" font-size:9pt;color:#5f5f5f;font-family:sans-serif"><br>Subject:        </span><span style=" font-size:9pt;font-family:sans-serif">[gpfsug-discuss]
Sub-block size wrong on GPFS 5 filesystem?</span><span style=" font-size:9pt;color:#5f5f5f;font-family:sans-serif"><br>Sent by:        </span><a href="mailto:gpfsug-discuss-bounces@spectrumscale.org"><span style=" font-size:9pt;color:blue;font-family:sans-serif"><u>gpfsug-discuss-bounces@spectrumscale.org</u></span></a><span style=" font-size:12pt"><br></span><hr noshade><span style=" font-size:12pt"><br><br><br>Hi All, <br><br>Our production cluster is still on GPFS 4.2.3.x, but in preparation for
moving to GPFS 5 I have upgraded our small (7 node) test cluster to GPFS
5.0.1-1.  I am setting up a new filesystem there using hardware that
we recently life-cycled out of our production environment.<br><br>I "successfully" created a filesystem but I believe the sub-block size
is wrong.  I'm using a 4 MB filesystem block size, so according to
the mmcrfs man page the sub-block size should be 8K:<br><br>         Table 1. Block sizes and subblock sizes<br><br>
+-------------------------------+-------------------------------+<br>
| Block size                    | Subblock size                 |<br>
+-------------------------------+-------------------------------+<br>
| 64 KiB                        | 2 KiB                         |<br>
+-------------------------------+-------------------------------+<br>
| 128 KiB                       | 4 KiB                         |<br>
+-------------------------------+-------------------------------+<br>
| 256 KiB, 512 KiB, 1 MiB,      | 8 KiB                         |<br>
| 2 MiB, 4 MiB                  |                               |<br>
+-------------------------------+-------------------------------+<br>
| 8 MiB, 16 MiB                 | 16 KiB                        |<br>
+-------------------------------+-------------------------------+<br><br>However, it appears that it's 8K for the system pool but 32K for the other
pools:<br><br>
flag                value                    description<br>
------------------- ------------------------ -----------------------------------<br>
 -f                 8192                     Minimum fragment (subblock) size in bytes (system pool)<br>
                    32768                    Minimum fragment (subblock) size in bytes (other pools)<br>
 -i                 4096                     Inode size in bytes<br>
 -I                 32768                    Indirect block size in bytes<br>
 -m                 2                        Default number of metadata replicas<br>
 -M                 3                        Maximum number of metadata replicas<br>
 -r                 1                        Default number of data replicas<br>
 -R                 3                        Maximum number of data replicas<br>
 -j                 scatter                  Block allocation type<br>
 -D                 nfs4                     File locking semantics in effect<br>
 -k                 all                      ACL semantics in effect<br>
 -n                 32                       Estimated number of nodes that will mount file system<br>
 -B                 1048576                  Block size (system pool)<br>
                    4194304                  Block size (other pools)<br>
 -Q                 user;group;fileset       Quotas accounting enabled<br>
                    user;group;fileset       Quotas enforced<br>
                    none                     Default quotas enabled<br>
 --perfileset-quota No                       Per-fileset quota enforcement<br>
 --filesetdf        No                       Fileset df enabled?<br>
 -V                 19.01 (5.0.1.0)          File system version<br>
 --create-time      Wed Aug  1 11:39:39 2018 File system creation time<br>
 -z                 No                       Is DMAPI enabled?<br>
 -L                 33554432                 Logfile size<br>
 -E                 Yes                      Exact mtime mount option<br>
 -S                 relatime                 Suppress atime mount option<br>
 -K                 whenpossible             Strict replica allocation option<br>
 --fastea           Yes                      Fast external attributes enabled?<br>
 --encryption       No                       Encryption enabled?<br>
 --inode-limit      101095424                Maximum number of inodes<br>
 --log-replicas     0                        Number of log replicas<br>
 --is4KAligned      Yes                      is4KAligned?<br>
 --rapid-repair     Yes                      rapidRepair enabled?<br>
 --write-cache-threshold 0                   HAWC Threshold (max 65536)<br>
 --subblocks-per-full-block 128              Number of subblocks per full block<br>
 -P                 system;raid1;raid6       Disk storage pools in file system<br>
 --file-audit-log   No                       File Audit Logging enabled?<br>
 --maintenance-mode No                       Maintenance Mode enabled?<br>
 -d                 test21A3nsd;test21A4nsd;test21B3nsd;test21B4nsd;test23Ansd;test23Bnsd;test23Cnsd;test24Ansd;test24Bnsd;test24Cnsd;test25Ansd;test25Bnsd;test25Cnsd  Disks in file system<br>
 -A                 yes                      Automatic mount option<br>
 -o                 none                     Additional mount options<br>
 -T                 /gpfs5                   Default mount point<br>
 --mount-priority   0                        Mount priority<br><br>
Output of mmcrfs:<br><br>
mmcrfs gpfs5 -F ~/gpfs/gpfs5.stanza -A yes -B 4M -E yes -i 4096 -j scatter
-k all -K whenpossible -m 2 -M 3 -n 32 -Q yes -r 1 -R 3 -T /gpfs5 -v yes
--nofilesetdf --metadata-block-size 1M<br><br>The following disks of gpfs5 will be formatted on node testnsd3:<br>    test21A3nsd: size 953609 MB<br>    test21A4nsd: size 953609 MB<br>    test21B3nsd: size 953609 MB<br>    test21B4nsd: size 953609 MB<br>    test23Ansd: size 15259744 MB<br>    test23Bnsd: size 15259744 MB<br>    test23Cnsd: size 1907468 MB<br>    test24Ansd: size 15259744 MB<br>    test24Bnsd: size 15259744 MB<br>    test24Cnsd: size 1907468 MB<br>    test25Ansd: size 15259744 MB<br>    test25Bnsd: size 15259744 MB<br>    test25Cnsd: size 1907468 MB<br>Formatting file system ...<br>Disks up to size 8.29 TB can be added to storage pool system.<br>Disks up to size 16.60 TB can be added to storage pool raid1.<br>Disks up to size 132.62 TB can be added to storage pool raid6.<br>Creating Inode File<br>   8 % complete on Wed Aug  1 11:39:19 2018<br>  18 % complete on Wed Aug  1 11:39:24 2018<br>  27 % complete on Wed Aug  1 11:39:29 2018<br>  37 % complete on Wed Aug  1 11:39:34 2018<br>  48 % complete on Wed Aug  1 11:39:39 2018<br>  60 % complete on Wed Aug  1 11:39:44 2018<br>  72 % complete on Wed Aug  1 11:39:49 2018<br>  83 % complete on Wed Aug  1 11:39:54 2018<br>  95 % complete on Wed Aug  1 11:39:59 2018<br> 100 % complete on Wed Aug  1 11:40:01 2018<br>Creating Allocation Maps<br>Creating Log Files<br>   3 % complete on Wed Aug  1 11:40:07 2018<br>  28 % complete on Wed Aug  1 11:40:14 2018<br>  53 % complete on Wed Aug  1 11:40:19 2018<br>  78 % complete on Wed Aug  1 11:40:24 2018<br> 100 % complete on Wed Aug  1 11:40:25 2018<br>Clearing Inode Allocation Map<br>Clearing Block Allocation Map<br>Formatting Allocation Map for storage pool system<br>  85 % complete on Wed Aug  1 11:40:32 2018<br> 100 % complete on Wed Aug  1 11:40:33 2018<br>Formatting Allocation Map for storage pool raid1<br>  53 % complete on Wed Aug  1 11:40:38 2018<br> 100 % complete on Wed Aug  1 11:40:42 2018<br>Formatting Allocation Map for storage pool raid6<br>  20 % complete on Wed Aug  1 11:40:47 2018<br>  39 % complete on Wed Aug  1 11:40:52 2018<br>  60 % complete on Wed Aug  1 11:40:57 2018<br>  79 % complete on Wed Aug  1 11:41:02 2018<br> 100 % complete on Wed Aug  1 11:41:08 2018<br>Completed creation of file system /dev/gpfs5.<br>mmcrfs: Propagating the cluster configuration data to all<br>  affected nodes.  
This is an asynchronous process.<br><br>And contents of stanza file:<br><br>
%nsd:<br>  nsd=test21A3nsd<br>  usage=metadataOnly<br>  failureGroup=210<br>  pool=system<br>  servers=testnsd3,testnsd1,testnsd2<br>  device=dm-15<br><br>
%nsd:<br>  nsd=test21A4nsd<br>  usage=metadataOnly<br>  failureGroup=210<br>  pool=system<br>  servers=testnsd1,testnsd2,testnsd3<br>  device=dm-14<br><br>
%nsd:<br>  nsd=test21B3nsd<br>  usage=metadataOnly<br>  failureGroup=211<br>  pool=system<br>  servers=testnsd1,testnsd2,testnsd3<br>  device=dm-17<br><br>
%nsd:<br>  nsd=test21B4nsd<br>  usage=metadataOnly<br>  failureGroup=211<br>  pool=system<br>  servers=testnsd2,testnsd3,testnsd1<br>  device=dm-16<br><br>
%nsd:<br>  nsd=test23Ansd<br>  usage=dataOnly<br>  failureGroup=23<br>  pool=raid6<br>  servers=testnsd2,testnsd3,testnsd1<br>  device=dm-10<br><br>
%nsd:<br>  nsd=test23Bnsd<br>  usage=dataOnly<br>  failureGroup=23<br>  pool=raid6<br>  servers=testnsd3,testnsd1,testnsd2<br>  device=dm-9<br><br>
%nsd:<br>  nsd=test23Cnsd<br>  usage=dataOnly<br>  failureGroup=23<br>  pool=raid1<br>  servers=testnsd1,testnsd2,testnsd3<br>  device=dm-5<br><br>
%nsd:<br>  nsd=test24Ansd<br>  usage=dataOnly<br>  failureGroup=24<br>  pool=raid6<br>  servers=testnsd3,testnsd1,testnsd2<br>  device=dm-6<br><br>
%nsd:<br>  nsd=test24Bnsd<br>  usage=dataOnly<br>  failureGroup=24<br>  pool=raid6<br>  servers=testnsd1,testnsd2,testnsd3<br>  device=dm-0<br><br>
%nsd:<br>  nsd=test24Cnsd<br>  usage=dataOnly<br>  failureGroup=24<br>  pool=raid1<br>  servers=testnsd2,testnsd3,testnsd1<br>  device=dm-2<br><br>
%nsd:<br>  nsd=test25Ansd<br>  usage=dataOnly<br>  failureGroup=25<br>  pool=raid6<br>  servers=testnsd1,testnsd2,testnsd3<br>  device=dm-6<br><br>
%nsd:<br>  nsd=test25Bnsd<br>  usage=dataOnly<br>  failureGroup=25<br>  pool=raid6<br>  servers=testnsd2,testnsd3,testnsd1<br>  device=dm-6<br><br>
%nsd:<br>  nsd=test25Cnsd<br>  usage=dataOnly<br>  failureGroup=25<br>  pool=raid1<br>  servers=testnsd3,testnsd1,testnsd2<br>  device=dm-3<br><br>
%pool:<br>  pool=system<br>  blockSize=1M<br>  usage=metadataOnly<br>  layoutMap=scatter<br>  allowWriteAffinity=no<br><br>
%pool:<br>  pool=raid6<br>  blockSize=4M<br>  usage=dataOnly<br>  layoutMap=scatter<br>  allowWriteAffinity=no<br><br>
%pool:<br>  pool=raid1<br>  blockSize=4M<br>  usage=dataOnly<br>  layoutMap=scatter<br>  allowWriteAffinity=no<br><br>
What am I missing or what have I done wrong?  Thanks…<br><br>Kevin<br>—<br>Kevin Buterbaugh - Senior System Administrator<br>Vanderbilt University - Advanced Computing Center for Research and Education</span><span style=" font-size:12pt;color:blue"><u><br></u></span><a href="mailto:Kevin.Buterbaugh@vanderbilt.edu"><span style=" font-size:12pt;color:blue"><u>Kevin.Buterbaugh@vanderbilt.edu</u></span></a><span style=" font-size:12pt">-
(615)875-9633<br><br></span><tt><span style=" font-size:10pt"><br>_______________________________________________<br>gpfsug-discuss mailing list<br>gpfsug-discuss at spectrumscale.org<br></span></tt><a href="http://gpfsug.org/mailman/listinfo/gpfsug-discuss"><tt><span style=" font-size:10pt">http://gpfsug.org/mailman/listinfo/gpfsug-discuss</span></tt></a><br><br><BR>