[gpfsug-discuss] frequent OOM killer due to high memory usage of mmfsd

Christian Petersson christian.petersson at isstech.io
Wed Sep 6 21:33:31 BST 2023


Hi,
This is a tuning issue; we had the exact same problem in the past, and since we
changed the following parameters the CES nodes have never been killed again.

maxFilesToCache=1000000

maxStatCache=100000
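
For reference, these values can be applied with mmchconfig. Below is a minimal
sketch, assuming you want to raise the limits only on the protocol nodes via the
cesNodes node class (adjust the -N argument to your environment); note that
mmfsd may need to be restarted on the affected nodes for the new cache sizes to
take full effect:

    # raise the file and stat cache limits on the CES nodes only
    mmchconfig maxFilesToCache=1000000,maxStatCache=100000 -N cesNodes

    # verify the configured values
    mmlsconfig maxFilesToCache maxStatCache

You can then watch the effect on daemon memory with "mmdiag --memory" on an
affected node.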


Thanks

Christian

On Wed, 6 Sept 2023 at 20:59, Christoph Martin <martin at uni-mainz.de> wrote:

> Hi all,
>
> on a three node GPFS cluster with CES enabled and AFM-DR mirroring to a
> second cluster we see frequent OOM killer events due to a constantly
> growing mmfsd.
> The machines have 256G memory. The pagepool is configured to 16G.
> The GPFS version is 5.1.6-1.
> After a restart, mmfsd rapidly grows to about 100G of memory usage, and over
> some days it reaches 250G virtual and 220G physical memory usage.
> The OOM killer tries to kill processes such as pmcollector, and sometimes
> kills mmfsd itself.
>
> Does anybody see similar behavior?
> Any guesses as to what could help with this problem?
>
> Regards
> Christoph Martin
>
> --
> Christoph Martin
> Zentrum für Datenverarbeitung (ZDV)
> Head of Unix & Cloud
>
> Johannes Gutenberg-Universität Mainz
> Anselm Franz von Bentzel-Weg 12, 55128 Mainz
> Tel: +49 6131 39 26337
> martin at uni-mainz.de
> www.zdv.uni-mainz.de
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at gpfsug.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org
>


-- 
Kind regards
Christian Petersson

E-mail: Christian.Petersson at isstech.io
Mobile: 070-3251577
