[gpfsug-discuss] GPFS Independent Fileset Limit

Martin Lischewski m.lischewski at fz-juelich.de
Fri Aug 10 15:25:17 BST 2018


Hello Olaf, hello Marc,

we in Jülich are in the middle of migrating/copying all of our old 
filesystems, which were created with filesystem version 13.23 (GPFS 3.5.0.7), 
to new filesystems created with GPFS 5.0.1.

We are moving to new filesystems mainly for two reasons: 1. We want to use 
the new, increased number of subblocks per block. 2. We have to change our 
quotas from plain "group quota per filesystem" to "fileset quota".

The idea is to create a separate fileset for each group/project. For the 
users, the quota accounting becomes much more transparent: from then on, all 
data stored inside their directory (fileset) counts towards their quota, 
independent of file ownership.
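
Roughly, the setup per group/project would look like the sketch below 
(filesystem name, fileset name, junction path and quota values are just 
placeholder examples, and it assumes quota enforcement is enabled on the new 
filesystem):

  # enable quota enforcement on the new filesystem
  mmchfs fs1 -Q yes

  # one independent fileset per group/project, linked below the filesystem root
  mmcrfileset fs1 project_abc --inode-space new --inode-limit 1000000
  mmlinkfileset fs1 project_abc -J /gpfs/fs1/project_abc

  # block and file limits accounted against the fileset, regardless of file ownership
  mmsetquota fs1:project_abc --block 50T:55T --files 10M:11M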

Right now we have roughly 900 groups, which means we will create roughly 
900 filesets per filesystem. One filesystem will hold about 400 million 
inodes (and rising).

We will back up this filesystem with "mmbackup", so we talked to 
Dominic Mueller-Wicke and he recommended that we use independent filesets, 
because then the policy runs can be parallelized and we can increase the 
backup performance. We believe that we need these parallelized policy runs 
to meet our backup performance targets.
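
What we have in mind is, roughly, one mmbackup run per independent fileset, 
several of them in parallel (paths and the node list below are placeholders; 
each run is restricted to the inode space of its fileset):

  # incremental backup of a single independent fileset, scanning only its inode space;
  # several of these can run in parallel against different filesets
  mmbackup /gpfs/fs1/project_abc -t incremental --scope inodespace -N backupnode1,backupnode2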

But there are even more features we gain by using independent filesets, 
e.g. fileset-level snapshots and user and group quotas inside a fileset.

I did not know about performance issues with independent filesets... Can 
you give us some more information about this?

All in all we are strongly supporting the idea of increasing this limit.

Do I understand correctly that, by opening a PMR, IBM allows this limit to 
be increased for specific sites? I would much rather see the limit increased 
officially, so that it is publicly available and supported.

Regards,

Martin


On 10.08.2018 at 14:51, Olaf Weiser wrote:
> Hello Stephan,
> the limit is not a hard-coded limit - technically speaking, you can 
> raise it easily.
> But as always, it is a question of testing and support.
>
> I've seen customer cases where a much smaller number of independent 
> filesets generated a lot of performance issues, hangs ... at least 
> noise and partial trouble.
> That might not be the case with your specific workload, given that you 
> are already running close to 1000 ...
>
> I suspect that this number of 1000 filesets - at the time it was 
> introduced - was simply a number that had to be picked ...
>
> ... it turns out that a general commitment to support more than 1000 
> independent filesets is more or less hard, because which use cases 
> should we test / support?
> I think there might be a good chance that, for your specific 
> workload, more than 1000 would be allowed and supported.
>
> Do you have a PMR open on your side for this? If not - I know, 
> opening a PMR is extra effort - but could you please open one?
> Then we can decide whether raising the limit is an option for you.
>
>
>
>
>
> Mit freundlichen Grüßen / Kind regards
>
>
> Olaf Weiser
>
> EMEA Storage Competence Center Mainz, Germany / IBM Systems, Storage 
> Platform,
> -------------------------------------------------------------------------------------------------------------------------------------------
> IBM Deutschland
> IBM Allee 1
> 71139 Ehningen
> Phone: +49-170-579-44-66
> E-Mail: olaf.weiser at de.ibm.com
> -------------------------------------------------------------------------------------------------------------------------------------------
> IBM Deutschland GmbH / Vorsitzender des Aufsichtsrats: Martin Jetter
> Geschäftsführung: Martina Koederitz (Vorsitzende), Susanne Peter, 
> Norbert Janzen, Dr. Christian Keller, Ivo Koerner, Markus Koerner
> Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht 
> Stuttgart, HRB 14562 / WEEE-Reg.-Nr. DE 99369940
>
>
>
> From: "Peinkofer, Stephan" <Stephan.Peinkofer at lrz.de>
> To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
> Cc: Doris Franke <doris.franke at de.ibm.com>, Uwe Tron 
> <utron at lenovo.com>, Dorian Krause <d.krause at fz-juelich.de>
> Date: 08/10/2018 01:29 PM
> Subject: [gpfsug-discuss] GPFS Independent Fileset Limit
> Sent by: gpfsug-discuss-bounces at spectrumscale.org
> ------------------------------------------------------------------------
>
>
>
> Dear IBM and GPFS List,
>
> we at the Leibniz Supercomputing Centre and our GCS Partners from the 
> Jülich Supercomputing Centre will soon be hitting the current 
> Independent Fileset Limit of 1000 on a number of our GPFS Filesystems.
>
> There are also a number of open RFEs from other users that target 
> this limitation:
> https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=56780
> https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=120534
> https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=106530
> https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=85282
>
> I know GPFS Development was very busy fulfilling the CORAL 
> requirements, but maybe now there is again some time to improve 
> something else.
>
> If there are any other users on the list who are approaching the 
> current limit on independent filesets, please take some time and vote 
> for the RFEs above.
>
> Many thanks in advance and have a nice weekend.
> Best Regards,
> Stephan Peinkofer
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>
