[gpfsug-discuss] dependent versus independent filesets

leslie elliott leslie.james.elliott at gmail.com
Wed Jul 8 00:19:20 BST 2020


as long as you currently do not need more than 1000 of them on a filesystem

On Wed, 8 Jul 2020 at 04:20, Daniel Kidger <daniel.kidger at uk.ibm.com> wrote:

> It is worth noting that Independent Filesets are a relatively recent
> addition to Spectrum Scale compared to Dependent Filesets, and they have
> solved some of the limitations of the latter.
>
>
> My view would be to always use Independent Filesets unless there is a
> particular reason to use Dependent ones.
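As a sketch of the difference (filesystem and fileset names here are hypothetical; verify flags against the mmcrfileset man page for your release):

```shell
# Independent fileset: gets its own inode space, so it can be snapshotted,
# quota'd, and backed up with mmbackup on its own.  Here we cap it at 1M
# inodes and preallocate 100k of them.
mmcrfileset gpfs01 projA --inode-space new --inode-limit 1000000:100000

# Dependent fileset: shares the root fileset's inode space.
mmcrfileset gpfs01 scratchA --inode-space root

# Either kind must be linked into the namespace before use.
mmlinkfileset gpfs01 projA -J /gpfs/gpfs01/projA
mmlinkfileset gpfs01 scratchA -J /gpfs/gpfs01/scratchA
```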
>
> Daniel
>
> _________________________________________________________
> *Daniel Kidger Ph.D.*
> IBM Technical Sales Specialist
> Spectrum Scale, Spectrum Discover  and IBM Cloud Object Store
>
> +44-(0)7818 522 266
> daniel.kidger at uk.ibm.com
>
> <https://www.youracclaim.com/badges/687cf790-fe65-4a92-b129-d23ae41862ac/public_url>
> <https://www.youracclaim.com/badges/8153c6a7-3e02-40be-87ee-24e27ae9459c/public_url>
> <https://www.youracclaim.com/badges/78197e2c-4277-4ec9-808b-ad6abe1e1b16/public_url>
>
>
>
>
> ----- Original message -----
> From: "Frederick Stock" <stockf at us.ibm.com>
> Sent by: gpfsug-discuss-bounces at spectrumscale.org
> To: gpfsug-discuss at spectrumscale.org
> Cc: gpfsug-discuss at spectrumscale.org
> Subject: [EXTERNAL] Re: [gpfsug-discuss] dependent versus independent
> filesets
> Date: Tue, Jul 7, 2020 17:25
>
> One comment about inode preallocation.  There was a time when inode
> creation was performance challenged, but in my opinion that is no longer
> the case, unless you need file creates to complete at extreme speed.  In
> my experience it is the rare customer that requires extremely fast file
> create times, so pre-allocation is not truly necessary.  As was noted,
> once an inode is allocated it cannot be deallocated.  The more important
> item is the maximum number of inodes defined for a fileset or file
> system.  Yes, those do need to be monitored so they can be increased if
> necessary to avoid out-of-space errors.
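A minimal monitoring sketch for the limits Fred mentions (filesystem and fileset names are placeholders; see the mmlsfileset and mmchfileset man pages for your release):

```shell
# Show allocated, used, and maximum inodes for each fileset.
mmlsfileset gpfs01 -L

# Raise the inode ceiling on an independent fileset before it fills up.
mmchfileset gpfs01 projA --inode-limit 2000000

# Filesystem-wide view of inode and metadata usage.
mmdf gpfs01 -F
```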
>
> Fred
> __________________________________________________
> Fred Stock | IBM Pittsburgh Lab | 720-430-8821
> stockf at us.ibm.com
>
>
>
> ----- Original message -----
> From: "Wahl, Edward" <ewahl at osc.edu>
> Sent by: gpfsug-discuss-bounces at spectrumscale.org
> To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
> Cc:
> Subject: [EXTERNAL] Re: [gpfsug-discuss] dependent versus independent
> filesets
> Date: Tue, Jul 7, 2020 11:59 AM
>
> We also went with independent filesets, for both backup and quota
> reasons, several years ago, and have stuck with this across to 5.x.
> However, we still maintain a small number of dependent filesets for
> administrative use.
>     Being able to run mmbackup on many filesets at once can increase your
> parallelization _quite_ nicely!  We create and delete the individual snaps
> before and after each backup, as you may expect.  Just be aware that if
> you do massive numbers of fast snapshot creates and deletes, you WILL
> reach a point where you run into issues quiescing compute clients, and
> that certain types of workloads have issues with snapshotting in general.
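The per-fileset snapshot-and-backup cycle described above looks roughly like this (names are placeholders, and the snapshot/-S syntax varies by release, so check the mmcrsnapshot and mmbackup man pages before relying on it):

```shell
FS=gpfs01
FSET=projA
SNAP=mmbackupSnap

# Snapshot just this independent fileset.
mmcrsnapshot $FS $FSET:$SNAP

# Back up the fileset's inode space from that snapshot; running one
# mmbackup per fileset in parallel is where the throughput gain comes from.
mmbackup /gpfs/$FS/$FSET --scope inodespace -S $SNAP -t incremental

# Drop the snapshot once the backup finishes.
mmdelsnapshot $FS $FSET:$SNAP
```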
>
> You have to more closely watch what you pre-allocate, and what you have
> left in the common metadata/inode pool.  Once allocated, even if unused,
> inodes cannot be reduced without removing the fileset and re-creating it
> (say a fileset owner had 5 million inodes allocated and now only needs
> 500,000).
>
> Growth can also be an issue if you do NOT fully pre-allocate each space.
> This can be scary if you are not used to over-subscription in general.  But
> I imagine that most sites have some decent % of oversubscription if they
> use filesets and quotas.
>
> Ed
> OSC
>
> -----Original Message-----
> From: gpfsug-discuss-bounces at spectrumscale.org <
> gpfsug-discuss-bounces at spectrumscale.org> On Behalf Of Skylar Thompson
> Sent: Tuesday, July 7, 2020 10:00 AM
> To: gpfsug-discuss at spectrumscale.org
> Subject: Re: [gpfsug-discuss] dependent versus independent filesets
>
> We wanted to be able to snapshot and backup filesets separately with
> mmbackup, so went with independent filesets.
>
> On Tue, Jul 07, 2020 at 08:37:46AM -0500, Damir Krstic wrote:
> > We are deploying our new ESS and are considering moving to independent
> > filesets. The snapshot per fileset feature appeals to us.
> >
> > Has anyone considered independent vs. dependent filesets, and what was
> > your reasoning to go with one as opposed to the other? Or perhaps you
> > opted to have both on your filesystem, and if so, what was the reasoning
> > for it?
> >
> > Thank you.
> > Damir
>
> > _______________________________________________
> > gpfsug-discuss mailing list
> > gpfsug-discuss at spectrumscale.org
> > http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>
>
> --
> -- Skylar Thompson (skylar2 at u.washington.edu)
> -- Genome Sciences Department (UW Medicine), System Administrator
> -- Foege Building S046, (206)-685-7354
> -- Pronouns: He/Him/His
>
>
>
>
>
>
>
> Unless stated otherwise above:
> IBM United Kingdom Limited - Registered in England and Wales with number
> 741598.
> Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU
>
>


More information about the gpfsug-discuss mailing list