[gpfsug-discuss] memory needed for gpfs clients

Christopher Black cblack at nygenome.org
Tue Dec 1 19:07:58 GMT 2020

We tune vm-related sysctl values on our gpfs clients.
These are the values we use on HPC nodes with 256 GB+ of memory (roughly 3.2 GiB and 1.6 GiB):
vm.dirty_bytes = 3435973836
vm.dirty_background_bytes = 1717986918

The vm.dirty parameters prevent NFS from buffering huge amounts of writes and then pushing them over the network all at once, flooding out gpfs traffic.
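As a sketch, the two values above can be made persistent across reboots with a sysctl drop-in file (the filename below is an assumption, not something from the original mail) and loaded with `sysctl --system`:

```
# /etc/sysctl.d/90-gpfs-client.conf   (filename is illustrative)
# Cap total dirty page cache at ~3.2 GiB and start background
# writeback at ~1.6 GiB, so write bursts drain steadily instead
# of flushing all at once over the network.
vm.dirty_bytes = 3435973836
vm.dirty_background_bytes = 1717986918
```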

I'd also recommend checking the client gpfs parameters pagepool and/or pagepoolMaxPhysMemPct, to ensure you have a reasonable and understood limit on how much memory mmfsd will use.
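A minimal sketch of checking this on a client, assuming the standard Spectrum Scale CLI is installed (the 75% figure below is an illustrative value for pagepoolMaxPhysMemPct, not a recommendation):

```shell
# The mm* commands below are the standard Spectrum Scale CLI; run them
# on a node with GPFS installed:
#   mmlsconfig pagepool pagepoolMaxPhysMemPct   # show configured values
#   mmdiag --memory                             # show mmfsd's current usage
#
# Worked example: on a 256 GiB node with pagepoolMaxPhysMemPct=75
# (illustrative), the pagepool could grow to:
mem_gib=256
pct=75
echo "pagepool ceiling: $(( mem_gib * pct / 100 )) GiB"
# prints: pagepool ceiling: 192 GiB
```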


On 12/1/20, 1:32 PM, "Renata Maria Dart" <renata at slac.stanford.edu> (via gpfsug-discuss-bounces at spectrumscale.org) wrote:

    Hi, some of our gpfs clients will get stale file handles for gpfs
    mounts, and it seems to be related to memory depletion.  Even after
    the memory is freed, though, gpfs will continue to be unavailable and
    df will hang.  I have read about setting vm.min_free_kbytes as a
    possible fix for this, but wasn't sure if it was meant for a gpfs
    server or if a gpfs client would also benefit, and what value should
    be set.

    Thanks for any insights,


    gpfsug-discuss mailing list
    gpfsug-discuss at spectrumscale.org


