[gpfsug-discuss] Multiple Block Sizes in a Filesystem (Was: Online data migration tool)

Marc A Kaplan makaplan at us.ibm.com
Fri Jan 12 22:58:39 GMT 2018


Having multiple blocksizes in the same file system would unnecessarily
complicate things.  Consider migrating a file from one pool to another
with different blocksizes... how would the indirect blocks (the lists
of blocks allocated to the file) be represented?  Especially consider
that today migration can proceed one block at a time; during migration
a file is "mis-placed" -- it has blocks spread over more than one pool....

The new feature that supports more than 32 sub-blocks per block is a
step in another direction, but maybe addresses some of your concerns....
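
To make the arithmetic concrete (the 1024-sub-block figure below is
illustrative; the product picks the sub-block count per blocksize):

  # minimum allocation unit (one sub-block), in KiB, for a 16 MiB block
  echo $(( 16384 / 32 ))     # fixed 32 sub-blocks (pre-5.0): 512 KiB
  echo $(( 16384 / 1024 ))   # 1024 sub-blocks (5.0-style): 16 KiB

Small files on a large-blocksize filesystem therefore waste far less
space once a block can be carved into more sub-blocks.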

We do support a different blocksize for meta-data -- but meta-data is
distinct from data and never migrates out of the system pool.
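
That split is fixed at file system creation time; a minimal sketch
(device and stanza file names are hypothetical):

  # 4 MiB data blocks; 512 KiB blocks for metadata in the system pool
  mmcrfs fs1 -F nsd.stanza -B 4M --metadata-block-size 512K

Note that --metadata-block-size applies when the system pool holds
metadata only (metadataOnly NSD usage).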

--marc K.



From:   Aaron Knister <aaron.s.knister at nasa.gov>
To:     <gpfsug-discuss at spectrumscale.org>
Date:   01/08/2018 09:25 PM
Subject:        Re: [gpfsug-discuss] Multiple Block Sizes in a Filesystem 
(Was: Online data migration tool)
Sent by:        gpfsug-discuss-bounces at spectrumscale.org



Thanks, Bryan! That's a great use case I hadn't thought of.

GPFS can already support a different block size for the system pool, so
in my very simplistic view of the world it's already possible (unless
there's some implementation detail about the system pool that permits it
to differ from all other pools but wouldn't apply to non-system pools
differing from each other).

-Aaron

On 1/8/18 6:48 PM, Bryan Banister wrote:
> Hey Aaron... I have been talking about the same idea here and would
> say it would be a massive feature and management improvement.
> 
> I would like to have many GPFS storage pools in my file system, each
> with blocksize and subblock sizes tuned to suit the application, using
> independent filesets and the data placement policy to store the data in
> the right GPFS storage pool.  Migrating the data with the policy engine
> between these pools as you described would be a lot faster and a lot
> safer than trying to migrate files individually (like with rsync).
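> 
> A minimal sketch of such a policy (pool names, the fileset, and the
> size threshold are all hypothetical):
> 
>   /* place new files from fileset 'appdata' in a 1 MiB-blocksize pool */
>   RULE 'place' SET POOL 'pool_1m' FOR FILESET ('appdata')
>   /* move files larger than 1 GiB to a large-blocksize capacity pool */
>   RULE 'mig' MIGRATE FROM POOL 'pool_1m' TO POOL 'pool_16m'
>     WHERE FILE_SIZE > 1073741824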
> 
> NSDs can only belong to one storage pool, so I don't see why the block
> allocation map would be difficult to manage in this case.
> 
> Cheers,
> -Bryan
> 
> -----Original Message-----
> From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Aaron Knister
> Sent: Monday, January 08, 2018 4:57 PM
> To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
> Subject: [gpfsug-discuss] Multiple Block Sizes in a Filesystem (Was: Online data migration tool)
> 
> Note: External Email
> -------------------------------------------------
> 
> I was thinking some more about the >32 subblock feature in Scale 5.0.
> As mentioned by IBM, the biggest advantage of that feature is on
> filesystems with large blocks (e.g. multiple MB).  The majority of our
> filesystems have a block size of 1MB, which got me thinking... wouldn't
> it be nice if they had a larger block size (there seem to be compelling
> performance reasons for large file I/O to do this)?
> 
> I'm wondering what the feasibility is of supporting filesystem pools
> of varying block sizes within a single filesystem. I thought allocation
> maps exist on a per-pool basis, which gives me some hope it's not too
> hard.
> 
> If one could do this then, yes, you'd still need new hardware to migrate
> to a larger block size (and >32 subblocks), but it could be done as part
> of a refresh cycle *and* (this is the part most important to me) it
> could be driven entirely by the policy engine, which means storage admins
> are largely hands-off and the migration is by and large transparent to
> the end user.
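> 
> For what it's worth, a sketch of how that hands-off flow might run
> (device and policy file names are hypothetical):
> 
>   # evaluate the rules, marking files illplaced but deferring data movement
>   mmapplypolicy fs1 -P migrate.pol -I defer
>   # later, move the illplaced blocks to their new pools in bulk
>   mmrestripefs fs1 -p
> 
> Using -I defer lets the policy scan finish quickly and batches the
> actual block movement into the restripe.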
> 
> This doesn't really address the need for a tool to handle a filesystem
> migration to 4k metadata blocks (although I wonder if something could be
> done to create a system_4k pool that contains 4k-aligned metadata NSDs
> where key data structures get re-written during a restripe in a
> 4k-aligned manner, but that's really grasping at straws for me).
> 
> -Aaron
> 
> --
> Aaron Knister
> NASA Center for Climate Simulation (Code 606.2)
> Goddard Space Flight Center
> (301) 286-2776
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss


-- 
Aaron Knister
NASA Center for Climate Simulation (Code 606.2)
Goddard Space Flight Center
(301) 286-2776
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss





