[gpfsug-discuss] Performance of GPFS when filesystem is almost full

Carl mutantllama at gmail.com
Tue Nov 7 00:12:11 GMT 2017


Thanks to all for the information.

I'm happy to say that it is close to what I had hoped would be the case.

It was interesting to see the effect of the -n value. It reinforces the need
to think about it rather than going with the defaults.
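
For anyone wanting to sanity-check their own setup, a quick way to compare
the -n estimate against the actual number of mounts (a rough sketch; gpfs0
is a placeholder device name):

  # the -n estimate recorded for the file system
  mmlsfs gpfs0 -n

  # the nodes that currently have it mounted
  mmlsmount gpfs0 -L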

Thanks again,

Carl.


On 7 November 2017 at 03:18, Achim Rehor <Achim.Rehor at de.ibm.com> wrote:

> I have no practical experience with these numbers; however, Peter's
> experience below matches what I learned from Dan years ago.
>
> As long as the -n setting of the FS (the number of nodes potentially
> mounting the FS) more or less matches the actual number of mounts, this
> 99.x % threshold before degradation is expected. If you are far off with
> that -n estimate, for example having it set to 32 while the actual number
> of mounts is in the thousands, then degradation happens earlier, since
> the distribution of free blocks in the allocation maps does not match
> the actual setup as well as it could.
>
> Naturally, this also depends on how you fill the FS. If only a small
> percentage of the nodes are doing the creates, the distribution can be
> 'wrong' as well: individual nodes run out of allocation map space
> earlier and need to look for free blocks elsewhere, costing RPC cycles
> and thus performance.
>
> Putting this in numbers seems quite difficult ;)
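>
> For what it is worth, a minimal sketch of choosing -n up front (the
> device name, stanza file and node count are made-up placeholders; -n is
> best chosen at mmcrfs time, since as far as I know later changes do not
> redistribute the existing allocation map):
>
>   # create the file system with an explicit estimate of ~2000 mounting nodes
>   mmcrfs gpfs0 -F nsd.stanza -n 2000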
>
>
> Mit freundlichen Grüßen / Kind regards
>
> *Achim Rehor*
>
>
> From:        Peter Smith <peter.smith at framestore.com>
> To:        gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
> Date:        11/06/2017 09:17 AM
> Subject:        Re: [gpfsug-discuss] Performance of GPFS when filesystem is almost full
> Sent by:        gpfsug-discuss-bounces at spectrumscale.org
> ------------------------------
>
>
>
> Hi Carl.
>
> When we commissioned our system we ran an NFS stress tool, and filled the
> system to the top.
>
> No performance degradation was seen until it was 99.7% full.
>
> I believe that after this point it takes longer to find free blocks to
> write to.
>
> YMMV.
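>
> If you want to watch utilisation as a system approaches that point,
> per-pool numbers can be pulled from mmdf (a sketch; gpfs0 and the pool
> name 'data' are placeholders):
>
>   # free and used blocks for the data pool only
>   mmdf gpfs0 -P data
>
> Running it without -P shows all pools, metadata included.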
>
> On 6 November 2017 at 03:35, Carl <mutantllama at gmail.com> wrote:
> Hi Folks,
>
> Does anyone have much experience with the performance of GPFS as it
> gets close to full? In particular I am referring to split data/metadata,
> where the data pool goes over 80% utilisation.
>
> How much degradation do you see above 80% usage? Above 90%?
>
> Cheers,
>
> Carl.
>
>
>
>
>
>
> --
> Peter Smith · Senior Systems Engineer · framestore.com
>
>
>
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>
>