[gpfsug-discuss] Performance of GPFS when filesystem is almost full

Alex Chekholko alex at calicolabs.com
Tue Nov 7 17:50:54 GMT 2017


One of the parameters that you need to choose at filesystem creation time
is the block allocation type, set via the -j {cluster|scatter} parameter
to mmcrfs:
https://www.ibm.com/support/knowledgecenter/en/STXKQY_4.2.3/com.ibm.spectrum.scale.v4r23.doc/bl1ins_blkalmap.htm#ballmap

If you use "cluster", you will see quite high performance while the
filesystem is close to empty.  If you use "scatter", performance stays
the same regardless of filesystem utilization, because the blocks for a
given file are always scattered randomly.
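
As a minimal sketch (the device name gpfs01 and the stanza file path are
placeholders for your own), the allocation type is fixed at creation time
and can be read back later:

  # create a filesystem with scatter allocation (cannot be changed afterwards)
  mmcrfs gpfs01 -F /tmp/nsd.stanza -j scatter

  # confirm the block allocation type of an existing filesystem
  mmlsfs gpfs01 -j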

Some vendors set up their GPFS filesystems using "-j cluster" and then show
off their streaming write performance numbers, but that performance degrades
considerably as the filesystem fills up. With "scatter", the filesystem is
slower but performs consistently throughout its lifetime.
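
If you want to check this on your own system, a rough probe (a sketch only,
not a rigorous benchmark; the mount point and device name are assumptions)
is to time a large streaming write at increasing fill levels:

  # streaming-write probe; repeat as the filesystem fills up
  dd if=/dev/zero of=/gpfs/gpfs01/ddtest bs=8M count=4096 oflag=direct

  # note the utilization at which throughput starts to drop
  mmdf gpfs01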



On Tue, Nov 7, 2017 at 1:19 AM, Daniel Kidger <daniel.kidger at uk.ibm.com>
wrote:

> I understand that this near linear performance is one of the
> differentiators of Spectrum Scale.
> Others with more field experience than me might want to comment on how
> Lustre and other distributed filesystems perform as they approach full
> capacity.
>
> Daniel
>
>
> *Dr Daniel Kidger*
> IBM Technical Sales Specialist
> Software Defined Solution Sales
>
> +44 (0)7818 522 266
> daniel.kidger at uk.ibm.com
>
> On 7 Nov 2017, at 00:12, Carl <mutantllama at gmail.com> wrote:
>
> Thanks to all for the information.
>
> I'm happy to say that it is close to what I hoped would be the case.
>
> It is interesting to see the effect of the -n value. It reinforces the
> need to think about it rather than go with the defaults.
>
> Thanks again,
>
> Carl.
>
>
> On 7 November 2017 at 03:18, Achim Rehor <Achim.Rehor at de.ibm.com> wrote:
>
>> I have no practical experience with these numbers; however, Peter's
>> experience below matches what I learned from Dan years ago.
>>
>> As long as the -n setting of the FS (the number of nodes potentially
>> mounting the fs) more or less matches the actual number of mounts,
>> this 99.x % before degradation is expected. If you are far off with that
>> -n estimate, like having it set to 32 when the actual number of mounts
>> is in the thousands, then degradation happens earlier, since the
>> distribution of free blocks in the allocation maps does not match the
>> actual setup as well as it could.
>>
>> Naturally, this also depends on how you fill the FS. If only a small
>> percentage of the nodes are doing the creates, the distribution can be
>> 'wrong' as well: individual nodes run out of allocation map space
>> earlier and need to look for free blocks elsewhere, costing RPC cycles
>> and thus performance.
>>
>> Putting this in numbers seems quite difficult ;)
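>>
>> A minimal sketch of the knob in question (device name and stanza path
>> are hypothetical): -n is an estimate given at creation time and is used
>> when sizing the block allocation map, so it pays to set it realistically.
>>
>>   # create the filesystem sized for roughly 1000 mounting nodes
>>   mmcrfs gpfs01 -F /tmp/nsd.stanza -n 1000
>>
>>   # show the -n estimate recorded for an existing filesystem
>>   mmlsfs gpfs01 -n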
>>
>>
>> Mit freundlichen Grüßen / Kind regards
>>
>> *Achim Rehor*
>>
>> ------------------------------
>>
>> Software Technical Support Specialist AIX / EMEA HPC Support
>> IBM Certified Advanced Technical Expert - Power Systems with AIX
>> TSCC Software Service, Dept. 7922
>> Global Technology Services
>>
>> ------------------------------
>> Phone: +49-7034-274-7862  IBM Deutschland
>> E-Mail: Achim.Rehor at de.ibm.com  Am Weiher 24
>>      65451 Kelsterbach
>>      Germany
>>
>>
>>
>>
>>
>>
>>
>> From:        Peter Smith <peter.smith at framestore.com>
>> To:        gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
>> Date:        11/06/2017 09:17 AM
>> Subject:        Re: [gpfsug-discuss] Performance of GPFS when filesystem is almost full
>> Sent by:        gpfsug-discuss-bounces at spectrumscale.org
>> ------------------------------
>>
>>
>>
>> Hi Carl.
>>
>> When we commissioned our system we ran an NFS stress tool, and filled the
>> system to the top.
>>
>> No performance degradation was seen until it was 99.7% full.
>>
>> I believe that after this point it takes longer to find free blocks to
>> write to.
>>
>> YMMV.
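>>
>> A sketch of how one might watch for that point (the pool name "data" is
>> an assumption): mmdf reports free full blocks and fragments per pool,
>> and a shrinking count of full free blocks is the warning sign.
>>
>>   mmdf gpfs01 -P data    # per-pool free blocks and fragments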
>>
>> On 6 November 2017 at 03:35, Carl <mutantllama at gmail.com> wrote:
>> Hi Folks,
>>
>> Does anyone have much experience with the performance of GPFS as it
>> gets close to full? In particular I am referring to split data/metadata,
>> where the data pool goes over 80% utilisation.
>>
>> How much degradation do you see above 80% usage? Above 90%?
>>
>> Cheers,
>>
>> Carl.
>>
>>
>>
>>
>>
>>
>>
>> --
>> Peter Smith · Senior Systems Engineer
>> London · New York · Los Angeles · Chicago · Montréal
>> T  +44 (0)20 7344 8000 · M  +44 (0)7816 123009
>> 19-23 Wells Street, London W1T 3PQ
>> framestore.com
>>
>>
>>
>>
>>
>>
>
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>
>


More information about the gpfsug-discuss mailing list