[gpfsug-discuss] Perfmon and GUI

Mark Bush Mark.Bush at siriuscom.com
Wed Apr 26 14:26:08 BST 2017


My saga has come to an end.  It turns out that to get perf stats for NFS you need the gpfs.pm-ganesha package - duh.  I typically do manual installs of Scale, so I just missed this one; it was buried in /usr/lpp/mmfs/4.2.3.0/zimon_rpms/rhel7.  Anyway, with the package installed I now get NFS stats in the GUI and from the CLI.
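For anyone who hits the same thing, here's roughly what the fix looked like (a sketch from my notes; the node name is illustrative, and you'll want to adjust the version path for your release):

# Confirm the NFS (Ganesha) perfmon package is on the install media
[root@n3 ~]# ls /usr/lpp/mmfs/4.2.3.0/zimon_rpms/rhel7 | grep ganesha
# Install it on each protocol node serving NFS, then restart the sensor daemon
[root@n3 ~]# yum localinstall /usr/lpp/mmfs/4.2.3.0/zimon_rpms/rhel7/gpfs.pm-ganesha-*.rpm
[root@n3 ~]# systemctl restart pmsensors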


From: "Sobey, Richard A" <r.sobey at imperial.ac.uk>
Reply-To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Date: Tuesday, April 25, 2017 at 9:31 AM
To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Subject: Re: [gpfsug-discuss] Perfmon and GUI

No worries, Mark. We don’t use NFS here (yet) so I can’t help there.

Glad I could help.

Richard

From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Mark Bush
Sent: 25 April 2017 15:29
To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Subject: Re: [gpfsug-discuss] Perfmon and GUI

Update:  SMB monitoring is now working after copying all the files per Richard’s recommendation (thank you, sir) and restarting pmsensors, pmcollector, and gpfsgui.  Sadly, NFS monitoring isn’t; it doesn’t work from the CLI either, so clearly something is up with that part.  I’ll continue to troubleshoot.
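For completeness, the restart sequence was along these lines (a sketch; systemd service names, with the collector and GUI living on n1 in my cluster):

# On each protocol node: restart the sensor daemon
[root@n3 ~]# systemctl restart pmsensors
# On the collector node: restart the collector
[root@n1 ~]# systemctl restart pmcollector
# On the GUI node: restart the GUI service
[root@n1 ~]# systemctl restart gpfsgui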

From: Mark Bush <Mark.Bush at siriuscom.com>
Reply-To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Date: Tuesday, April 25, 2017 at 9:13 AM
To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Subject: Re: [gpfsug-discuss] Perfmon and GUI

Interesting.  Some files were indeed already there, but a few were missing, NFSIO.cfg being the most notable to me.  I’ve gone ahead and copied those to all my nodes (just three in this cluster) and restarted services.  Still no luck.  I’m going to restart the GUI service next to see if that makes a difference.  Interestingly, I can do things like mmperfmon query smb2 and that works and gives me real data, so I’m not sure where the breakdown is in the GUI.  An example of the kind of query that works is below.
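(Sketch from a protocol node; the bucket option is just how I happened to call it:)

# Returns real SMB counters even while the GUI shows no data
[root@n3 ~]# mmperfmon query smb2 --number-buckets 5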


Mark

From: "Sobey, Richard A" <r.sobey at imperial.ac.uk<mailto:r.sobey at imperial.ac.uk>>
Reply-To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org<mailto:gpfsug-discuss at spectrumscale.org>>
Date: Tuesday, April 25, 2017 at 8:44 AM
To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org<mailto:gpfsug-discuss at spectrumscale.org>>
Subject: Re: [gpfsug-discuss] Perfmon and GUI

I would have thought this would be fixed by now, as it happened to me in 4.2.1-(0?). Here’s what support said; can you try it? I think you’ve already got the relevant bits in your .cfg files, so it should just be a case of copying the files across and restarting pmsensors and pmcollector.

Again, bear in mind this affected me on 4.2.1 and you’re using 4.2.3, so YMMV.

“
I spoke with development, and normally these files would be copied over to /opt/IBM/zimon when using the automatic installer, but since this case doesn't use the installer we have to copy them over manually. We acknowledge this should be in the docs; the reason it is not included in the pmsensors rpm is that these files do not come from the zimon team.

The following files can be copied over to /opt/IBM/zimon

[root@node1 default]# pwd
/usr/lpp/mmfs/4.2.1.0/installer/cookbooks/zimon_on_gpfs/files/default

[root@node1 default]# ls
CTDBDBStats.cfg  CTDBStats.cfg  NFSIO.cfg  SMBGlobalStats.cfg
SMBSensors.cfg  SMBStats.cfg  ZIMonCollector.cfg
“
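So on each node, something along these lines (a sketch using the 4.2.1 paths from the quote above; your 4.2.3 source directory will differ):

# Copy the protocol sensor definitions the installer would have placed
[root@node1 ~]# cp /usr/lpp/mmfs/4.2.1.0/installer/cookbooks/zimon_on_gpfs/files/default/*.cfg /opt/IBM/zimon/
# Then restart the perfmon daemons
[root@node1 ~]# systemctl restart pmsensors pmcollector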

Richard

From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Mark Bush
Sent: 25 April 2017 14:28
To: gpfsug-discuss at spectrumscale.org
Subject: [gpfsug-discuss] Perfmon and GUI

Does anyone know why, in the GUI, when I go to Nodes, select a protocol node, and then pick NFS or SMB, the boxes where the graphs should be show a red circled X and say “Performance collector did not return any data”?
I’ve added the sensor entries from this link to /opt/IBM/zimon/ZIMonSensors.cfg on my protocol nodes: https://www.ibm.com/support/knowledgecenter/en/STXKQY_4.2.0/com.ibm.spectrum.scale.v4r2.adv.doc/bl1adv_configuringthePMT.htm

I also restarted both pmsensors and pmcollector on the nodes.  What am I missing?  Here’s my ZIMonSensors.cfg file:

[root@n3 zimon]# cat ZIMonSensors.cfg
cephMon = "/opt/IBM/zimon/CephMonProxy"
cephRados = "/opt/IBM/zimon/CephRadosProxy"
colCandidates = "n1"
colRedundancy = 1
collectors = {
        host = "n1"
        port = "4739"
}
config = "/opt/IBM/zimon/ZIMonSensors.cfg"
ctdbstat = ""
daemonize = T
hostname = ""
ipfixinterface = "0.0.0.0"
logfile = "/var/log/zimon/ZIMonSensors.log"
loglevel = "info"
mmcmd = "/opt/IBM/zimon/MMCmdProxy"
mmdfcmd = "/opt/IBM/zimon/MMDFProxy"
mmpmon = "/opt/IBM/zimon/MmpmonSockProxy"
piddir = "/var/run"
release = "4.2.3-0"
sensors = {
        name = "CPU"
        period = 1
},
{
        name = "Load"
        period = 1
},
{
        name = "Memory"
        period = 1
},
{
        name = "Network"
        period = 1
},
{
        name = "Netstat"
        period = 10
},
{
        name = "Diskstat"
        period = 0
},
{
        name = "DiskFree"
        period = 600
},
{
        name = "GPFSDisk"
        period = 0
},
{
        name = "GPFSFilesystem"
        period = 1
},
{
        name = "GPFSNSDDisk"
        period = 0
        restrict = "nsdNodes"
},
{
        name = "GPFSPoolIO"
        period = 0
},
{
        name = "GPFSVFS"
        period = 1
},
{
        name = "GPFSIOC"
        period = 0
},
{
        name = "GPFSVIO"
        period = 0
},
{
        name = "GPFSPDDisk"
        period = 0
        restrict = "nsdNodes"
},
{
        name = "GPFSvFLUSH"
        period = 0
},
{
        name = "GPFSNode"
        period = 1
},
{
        name = "GPFSNodeAPI"
        period = 1
},
{
        name = "GPFSFilesystemAPI"
        period = 1
},
{
        name = "GPFSLROC"
        period = 0
},
{
        name = "GPFSCHMS"
        period = 0
},
{
        name = "GPFSAFM"
        period = 0
},
{
        name = "GPFSAFMFS"
        period = 0
},
{
        name = "GPFSAFMFSET"
        period = 0
},
{
        name = "GPFSRPCS"
        period = 10
},
{
        name = "GPFSWaiters"
        period = 10
},
{
        name = "GPFSFilesetQuota"
        period = 3600
},
{
        name = "GPFSDiskCap"
        period = 0
},
{
        name = "GPFSFileset"
        period = 0
        restrict = "n1"
},
{
        name = "GPFSPool"
        period = 0
        restrict = "n1"
},
{
        name = "Infiniband"
        period = 0
},
{
        name = "CTDBDBStats"
        period = 1
        type = "Generic"
},
{
        name = "CTDBStats"
        period = 1
        type = "Generic"
},
{
        name = "NFSIO"
        period = 1
        type = "Generic"
},
{
        name = "SMBGlobalStats"
        period = 1
        type = "Generic"
},
{
        name = "SMBStats"
        period = 1
        type = "Generic"
}
smbstat = ""

