<font size=2 face="sans-serif">Can you provide the output of "pmap
4444"? If there's no "pmap" command on your system, then

Regards, The Spectrum Scale (GPFS) team

------------------------------------------------------------------------------------------------------------------
If you feel that your question can benefit other users of Spectrum Scale (GPFS), then please post it to the public IBM developerWorks Forum at https://www.ibm.com/developerworks/community/forums/html/forum?id=11111111-0000-0000-0000-000000000479.

If your query concerns a potential software error in Spectrum Scale (GPFS) and you have an IBM software maintenance contract, please contact 1-800-237-5511 in the United States or your local IBM Service Center in other countries.

The forum is informally monitored as time permits and should not be used for priority messages to the Spectrum Scale (GPFS) team.


From:    Peter Childs <p.childs@qmul.ac.uk>
To:      "gpfsug-discuss@spectrumscale.org" <gpfsug-discuss@spectrumscale.org>
Date:    07/24/2017 10:22 PM
Subject: Re: [gpfsug-discuss] GPFS Memory Usage Keeps going up and we don't know why.
Sent by: gpfsug-discuss-bounces@spectrumscale.org
</font><font size=1 face="sans-serif">gpfsug-discuss-bounces@spectrumscale.org</font><br><hr noshade><br><br><br><br><font size=3>top</font><br><br><font size=3>but ps gives the same value.</font><br><br><font size=3>[</font><a href=mailto:root@dn29><font size=3 color=blue><u>root@dn29</u></font></a><font size=3>~]# ps auww -q 4444</font><br><font size=3>USER PID %CPU %MEM VSZ
RSS TTY STAT START TIME COMMAND</font><br><font size=3>root 4444 2.7 22.3 10537600
5472580 ? S<Ll Jul12 466:13 /usr/lpp/mmfs/bin/mmfsd</font><br><br><font size=3>Thanks for the help</font><br><br><font size=3>Peter.</font><br><br><br><font size=3>On Mon, 2017-07-24 at 14:10 +0000, Jim Doherty wrote:</font><br><font size=2 face="Helvetica">How are you identifying the high
memory usage? </font><br><font size=2 face="Helvetica"><br></font><br><font size=2 face="Arial">On Monday, July 24, 2017 9:30 AM, Peter Childs

On Monday, July 24, 2017 9:30 AM, Peter Childs <p.childs@qmul.ac.uk> wrote:

I've had a look at mmfsadm dump malloc and it looks to agree with the output from mmdiag --memory, and it does not seem to account for the excessive memory usage.

The new machines do have idleSocketTimeout set to 0; from what you're saying, it could be related to keeping that many connections between nodes working.

Thanks in advance

Peter.


[root@dn29 ~]# mmdiag --memory

=== mmdiag: memory ===
mmfsd heap size: 2039808 bytes

Statistics for MemoryPool id 1 ("Shared Segment (EPHEMERAL)")
         128 bytes in use
 17500049370 hard limit on memory usage
     1048576 bytes committed to regions
           1 number of regions
         555 allocations
         555 frees
           0 allocation failures

Statistics for MemoryPool id 2 ("Shared Segment")
    42179592 bytes in use
 17500049370 hard limit on memory usage
    56623104 bytes committed to regions
           9 number of regions
      100027 allocations
       79624 frees
           0 allocation failures

Statistics for MemoryPool id 3 ("Token Manager")
     2099520 bytes in use
 17500049370 hard limit on memory usage
    16778240 bytes committed to regions
           1 number of regions
           4 allocations
           0 frees
           0 allocation failures
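
As a rough sanity check (a sketch only, assuming the output format above and that 4444 is still the mmfsd PID): the heap plus the committed pool regions add up to only about 73 MB, while ps reports roughly 5.2 GB resident, so almost all of the growth is outside what mmdiag --memory tracks (pagepool aside).

  # Sum what mmdiag --memory accounts for (heap + bytes committed to regions)
  mmdiag --memory | awk '/heap size/ {sum += $4} /bytes committed to regions/ {sum += $1} END {printf "%.0f MB tracked by mmdiag\n", sum/1048576}'
  # Compare with the resident set size reported by the kernel (RSS is in kB)
  ps -o rss= -q 4444 | awk '{printf "%.1f GB resident\n", $1/1048576}'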


On Mon, 2017-07-24 at 13:11 +0000, Jim Doherty wrote:

There are 3 places that the GPFS mmfsd uses memory: the pagepool plus 2 shared memory segments. To see the memory utilization of the shared memory segments, run the command "mmfsadm dump malloc". The statistics for memory pool id 2 are where the maxFilesToCache/maxStatCache objects are, and the manager nodes use memory pool id 3 to track the MFTC/MSC objects.
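
If it helps, one way to see whether the growth is in those pools or somewhere else is to snapshot both over time and compare. A minimal sketch (the log directory and the one-hour interval are arbitrary placeholders):

  # Capture periodic snapshots of mmfsd RSS and the shared-segment statistics
  while true; do
      ts=$(date +%Y%m%d-%H%M)
      ps -o rss= -C mmfsd > /var/tmp/mmfsd-rss.$ts
      /usr/lpp/mmfs/bin/mmfsadm dump malloc > /var/tmp/mmfsd-malloc.$ts
      sleep 3600
  done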

You might want to upgrade to a later PTF, as there was a PTF to fix a memory leak that occurred in tscomm associated with network connection drops.
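
(To confirm the exact level you are on before and after upgrading, something like the following should work on RPM-based nodes; the package glob is an assumption about your packaging:)

  # List the installed GPFS packages, including the PTF level
  rpm -qa 'gpfs*'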


On Monday, July 24, 2017 5:29 AM, Peter Childs <p.childs@qmul.ac.uk> wrote:

We have two GPFS clusters.

One is fairly old, running 4.2.1-2 and non-CCR, and the nodes run fine, using up about 1.5G of memory, and that is consistent (the GPFS pagepool is set to 1G, so that looks about right).

The other one is "newer", running 4.2.1-3 with CCR, and the nodes keep increasing in their memory usage, starting at about 1.1G. They are fine for a few days, but after a while they grow to 4.2G, which, when the node needs to run real work, means the work can't be done.

I'm losing track of what may be different other than CCR, and I'm trying to find some more ideas of where to look.

I've checked all the standard things like pagepool and maxFilesToCache (set to the default of 4000); workerThreads is set to 128 on the new GPFS cluster (against the default of 48 on the old).
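
For what it's worth, a simple way to compare the two clusters' full settings side by side (a sketch; the file names are placeholders, run one command on a node in each cluster):

  # Dump the effective configuration of each cluster and diff them
  /usr/lpp/mmfs/bin/mmlsconfig > /tmp/gpfs-config-old.txt    # on the old cluster
  /usr/lpp/mmfs/bin/mmlsconfig > /tmp/gpfs-config-new.txt    # on the new cluster
  diff /tmp/gpfs-config-old.txt /tmp/gpfs-config-new.txt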

I'm not sure what else to look at on this one, hence why I'm asking the community.

Thanks in advance

Peter Childs
ITS Research Storage
Queen Mary University of London.
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss

--
Peter Childs
ITS Research Storage
Queen Mary, University of London
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss