[gpfsug-discuss] GPFS v5: Blocksizes and subblocks

Buterbaugh, Kevin L Kevin.Buterbaugh at Vanderbilt.Edu
Wed Mar 27 14:32:46 GMT 2019


Hi All,

So I was looking at the presentation referenced below and it states - on multiple slides - that there is one system storage pool per cluster.  Really?  Shouldn’t that be one system storage pool per filesystem?  If not, please explain how, in my GPFS cluster with two (local) filesystems, I see two different system pools with two different sets of NSDs, two different capacities, and two different percentages full.

Thanks…

Kevin

—
Kevin Buterbaugh - Senior System Administrator
Vanderbilt University - Advanced Computing Center for Research and Education
Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633

On Mar 26, 2019, at 11:27 AM, Dorigo Alvise (PSI) <alvise.dorigo at psi.ch> wrote:

Hi Marc,
"Indirect block size" is well explained in this presentation:

http://files.gpfsug.org/presentations/2016/south-bank/D2_P2_A_spectrum_scale_metadata_dark_V2a.pdf

pages 37-41

Cheers,

   Alvise

________________________________
From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Caubet Serrabou Marc (PSI) [marc.caubet at psi.ch]
Sent: Tuesday, March 26, 2019 4:39 PM
To: gpfsug main discussion list
Subject: [gpfsug-discuss] GPFS v5: Blocksizes and subblocks

Hi all,

according to several GPFS presentations, as well as the man pages:

         Table 1. Block sizes and subblock sizes

+-------------------------------+-------------------------------+
| Block size                    | Subblock size                 |
+-------------------------------+-------------------------------+
| 64 KiB                        | 2 KiB                         |
+-------------------------------+-------------------------------+
| 128 KiB                       | 4 KiB                         |
+-------------------------------+-------------------------------+
| 256 KiB, 512 KiB, 1 MiB, 2    | 8 KiB                         |
| MiB, 4 MiB                    |                               |
+-------------------------------+-------------------------------+
| 8 MiB, 16 MiB                 | 16 KiB                        |
+-------------------------------+-------------------------------+

A block size of 8 MiB or 16 MiB should use a subblock size of 16 KiB.
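Encoded as a simple lookup (this is just Table 1 above transcribed into Python, not anything GPFS itself exposes):

```python
# Documented block size -> subblock size mapping from Table 1.
# All sizes are in KiB. Purely illustrative; values copied from the man page table.
SUBBLOCK_KIB = {
    64: 2,
    128: 4,
    256: 8, 512: 8, 1024: 8, 2048: 8, 4096: 8,
    8192: 16, 16384: 16,
}

def subblock_size_kib(block_size_kib: int) -> int:
    """Return the documented subblock size (KiB) for a given block size (KiB)."""
    return SUBBLOCK_KIB[block_size_kib]

print(subblock_size_kib(16384))  # 16 MiB block -> 16 (KiB), per the table
```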

However, when creating a new filesystem with a 16 MiB block size, it looks like it is using 128 KiB subblocks:

[root at merlindssio01 ~]# mmlsfs merlin
flag                value                    description
------------------- ------------------------ -----------------------------------
 -f                 8192                     Minimum fragment (subblock) size in bytes (system pool)
                    131072                   Minimum fragment (subblock) size in bytes (other pools)
 -i                 4096                     Inode size in bytes
 -I                 32768                    Indirect block size in bytes
.
.
.
 -n                 128                      Estimated number of nodes that will mount file system
 -B                 1048576                  Block size (system pool)
                    16777216                 Block size (other pools)
.
.
.

What am I missing? According to the documentation I expected this to be a fixed value; is it not?
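As a quick sanity check on the mmlsfs values above (plain arithmetic on the reported numbers, nothing more), both pools come out with the same number of subblocks per full block:

```python
# Values copied from the mmlsfs output above, in bytes.
system_block, system_subblock = 1048576, 8192      # 1 MiB block, 8 KiB subblock
other_block, other_subblock = 16777216, 131072     # 16 MiB block, 128 KiB subblock

# Subblocks per full block in each pool:
print(system_block // system_subblock)  # 128
print(other_block // other_subblock)    # 128
```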

On the other hand, I don't really understand the concept of 'Indirect block size in bytes'; can somebody clarify or provide some details about this setting?

Thanks a lot and best regards,
Marc
_________________________________________
Paul Scherrer Institut
High Performance Computing
Marc Caubet Serrabou
Building/Room: WHGA/019A
Forschungsstrasse, 111
5232 Villigen PSI
Switzerland

Telephone: +41 56 310 46 67
E-Mail: marc.caubet at psi.ch
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


