[gpfsug-discuss] pagepool

Ryan Novosielski novosirj at rutgers.edu
Fri Mar 8 16:35:47 GMT 2024


What are the units on that — is that 323GB? Zero chance you need it that high on clients.
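For anyone double-checking the arithmetic, a quick sketch of the unit conversion (assuming the value from mmlsconfig is plain bytes, which is how GPFS reports pagepool when no suffix is shown):

```python
# Convert the reported pagepool value (bytes) to decimal GB and binary GiB.
pagepool_bytes = 323908133683

gb = pagepool_bytes / 10**9   # decimal gigabytes
gib = pagepool_bytes / 2**30  # binary gibibytes

print(f"{gb:.1f} GB / {gib:.1f} GiB")  # roughly 324 GB, i.e. about 302 GiB
```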

Just for perspective, our pagepool on our clients is 4GB and on the DSS-G, it is 242GB.

If you have a brand-new config and don't have to worry about breaking your system with the wrong values, I would suggest starting with the settings in /opt/lenovo/dss/bin/dssClientConfig.sh (the settings themselves are in v5.worker.dssClientConfig in the same directory). I have to be more careful with that, as some of our values are already higher than those defaults. You just made me worry that perhaps I was still running with an out-of-date value there, but the default is still to raise the pagepool for clients to 4GB if you don't specify otherwise.

What I was told by Lenovo years ago was that this is about the level beyond which you stop noticing any difference from going larger. You may want to test values for your workloads, or see whether you fill it up when it's set to that Lenovo default, and then reconsider.

You can change it for a single node with -N <nodename>, if you want to test.
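A sketch of what that might look like with the standard Spectrum Scale admin commands (check the documentation for your release; the node name is a placeholder, and a pagepool change only takes effect after GPFS is restarted on the node unless your version supports applying it immediately):

```shell
# Check the current setting
mmlsconfig pagepool

# Lower pagepool to 4G on a single test node only
mmchconfig pagepool=4G -N testnode01

# Restart GPFS on that node so the new value takes effect
mmshutdown -N testnode01
mmstartup -N testnode01
```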

--
#BlackLivesMatter
____
|| \\UTGERS,     |---------------------------*O*---------------------------
||_// the State  |         Ryan Novosielski - novosirj at rutgers.edu
|| \\ University | Sr. Technologist - 973/972.0922 (2x0922) ~*~ RBHS Campus
||  \\    of NJ  | Office of Advanced Research Computing - MSB A555B, Newark
     `'

On Mar 8, 2024, at 09:39, Iban Cabrillo <cabrillo at ifca.unican.es> wrote:

Good afternoon,
   We are new to DSS system configuration. Reviewing the configuration, I have seen that the default pagepool is set to this value:

    pagepool 323908133683

But it is set not only on the DSS servers, but also on the rest of the HPC nodes, and I don't know whether that is an excessive value. We are noticing that some jobs are dying with "Memory cgroup out of memory: Killed process XXX", and my doubt is whether this pagepool is reserving too much memory for the mmfsd process, to the detriment of job execution.

Any advice is welcomed,

Regards, I
--

================================================================
  Ibán Cabrillo Bartolomé
  Instituto de Física de Cantabria (IFCA-CSIC)
  Santander, Spain
  Tel: +34942200969/+34669930421
  Responsible for advanced computing service (RSC)
=========================================================================================
All our suppliers must know and accept IFCA policy available at:

https://confluence.ifca.es/display/IC/Information+Security+Policy+for+External+Suppliers
==========================================================================================


_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at gpfsug.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org
