[gpfsug-discuss] Clarification of mmdiag --iohist output

Buterbaugh, Kevin L Kevin.Buterbaugh at Vanderbilt.Edu
Tue Feb 19 20:26:31 GMT 2019


Hi All,

My thanks to Aaron, Sven, Steve, and whoever responded for the GPFS team.  You confirmed what I suspected … my example 10 second I/O was _from an NSD server_ … and since we’re in an 8 Gb FC SAN environment, it therefore means - someone please correct me if I’m wrong about this - that I’ve got a problem somewhere in one (or more) of the following 3 components:

1) the NSD servers
2) the SAN fabric
3) the storage arrays

I’ve been looking at all of the above and none of them are showing any obvious problems.  I’ve actually got a techie from the storage array vendor stopping by on Thursday, so I’ll see if he can spot anything there.  Our FC switches are QLogic, so I’m kinda screwed there in terms of getting any help.  But I don’t see any errors in the switch logs, and “show perf” on the switches is showing I/O rates of 50-100 MB/sec on the in-use ports, so I don’t _think_ that’s the issue.
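
One quick way to see whether the slow I/Os cluster on particular LUNs / arrays (which would point at one array rather than the fabric or the servers) is to count slow I/Os per device on each NSD server.  Here’s a rough sketch of what I mean; it assumes the column layout from the example in my original message below (latency in ms as the 6th field, device name as the 8th), and the 1 second threshold is just an arbitrary cutoff:

    # count I/Os slower than 1 second per device, worst offenders first
    # (assumes mmdiag --iohist columns as in my example below:
    #  latency in ms is field 6, the device name is field 8)
    /usr/lpp/mmfs/bin/mmdiag --iohist | \
        awk '$6+0 > 1000 {print $8}' | sort | uniq -c | sort -rn

If the slow I/Os all land on devices backed by one array, that would point at component 3; if they’re spread across everything, the fabric or the servers themselves look more suspect.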

And this is the GPFS mailing list, after all … so let’s talk about the NSD servers.  Neither memory (64 GB) nor CPU (2 x quad-core Intel Xeon E5620s) appears to be an issue.  But I have been looking at the output of “mmfsadm saferdump nsd” based on what Aaron and then Steve said.  Here’s some fairly typical output from one of the SMALL queues (I’ve checked several of my 8 NSD servers and they’re all showing similar output):

    Queue NSD type NsdQueueTraditional [244]: SMALL, threads started 12, active 3, highest 12, deferred 0, chgSize 0, draining 0, is_chg 0
     requests pending 0, highest pending 73, total processed 4859732
     mutex 0x7F3E449B8F10, reqCond 0x7F3E449B8F58, thCond 0x7F3E449B8F98, queue 0x7F3E449B8EF0, nFreeNsdRequests 29

And for a LARGE queue:

    Queue NSD type NsdQueueTraditional [8]: LARGE, threads started 12, active 1, highest 12, deferred 0, chgSize 0, draining 0, is_chg 0
     requests pending 0, highest pending 71, total processed 2332966
     mutex 0x7F3E441F3890, reqCond 0x7F3E441F38D8, thCond 0x7F3E441F3918, queue 0x7F3E441F3870, nFreeNsdRequests 31

So my large queues seem to be slightly less utilized than my small queues overall … i.e. I see more inactive large queues and they generally have a smaller “highest pending” value.

Question:  are those non-zero “highest pending” values something to be concerned about?
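
In case it’s useful to anyone comparing notes, here’s a quick-and-dirty way to pull the largest “highest pending” values out of that dump.  It’s just a sketch based on the text format shown above; I assume the exact wording of those lines could differ between releases:

    # five largest "highest pending" values across all NSD queues
    /usr/lpp/mmfs/bin/mmfsadm saferdump nsd | \
        grep -o 'highest pending [0-9]*' | awk '{print $3}' | sort -n | tail -5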

I have the following thread-related parameters set:

[common]
maxReceiverThreads 12
nsdMaxWorkerThreads 640
nsdThreadsPerQueue 4
nsdSmallThreadRatio 3
workerThreads 128

[serverLicense]
nsdMaxWorkerThreads 1024
nsdThreadsPerQueue 12
nsdSmallThreadRatio 1
pitWorkerThreadsPerNode 3
workerThreads 1024
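
(For what it’s worth, a quick sketch of one way to confirm what’s actually in effect on a given NSD server; I’m assuming mmdiag --config lists each of these parameters by name, and “nsd01” below is just a placeholder hostname:)

    # check the live values of the thread-related parameters on an NSD server
    # ("nsd01" is a placeholder; mmdiag --config is assumed to list these by name)
    for p in nsdMaxWorkerThreads nsdThreadsPerQueue nsdSmallThreadRatio workerThreads; do
        ssh nsd01 /usr/lpp/mmfs/bin/mmdiag --config | grep -iw "$p"
    done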

Also, at the top of the “mmfsadm saferdump nsd” output I see:

Total server worker threads: running 1008, desired 147, forNSD 147, forGNR 0, nsdBigBufferSize 16777216
nsdMultiQueue: 256, nsdMultiQueueType: 1, nsdMinWorkerThreads: 16, nsdMaxWorkerThreads: 1024

Question:  is the fact that 1008 is pretty close to 1024 a concern?
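
In case anyone else wants to watch that number, here’s a rough sketch of how it could be sampled over time (the grep pattern just matches the “Total server worker threads” line shown above, and the 60 second interval is arbitrary):

    # sample the running NSD worker thread count once a minute
    while true; do
        date
        /usr/lpp/mmfs/bin/mmfsadm saferdump nsd | grep 'Total server worker threads'
        sleep 60
    done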

Anything jump out at anybody?  I don’t mind sharing full output, but it is rather lengthy.  Is this worthy of a PMR?

Thanks!

--
Kevin Buterbaugh - Senior System Administrator
Vanderbilt University - Advanced Computing Center for Research and Education
Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633

On Feb 17, 2019, at 1:01 PM, IBM Spectrum Scale <scale at us.ibm.com> wrote:

Hi Kevin,

The I/O history shown by mmdiag --iohist depends on the node on which you run the command.
If you run it on an NSD server node, it shows the time taken to complete/serve the read or write I/O operation sent from the client node.
If you run it on a client (non-NSD-server) node, it shows the total time taken for the read or write I/O operation requested by the client node to complete.
So, in a nutshell: for the NSD server case it is just the latency of the I/O done on disk by the server, whereas for the NSD client case it also includes the latency of sending the I/O request to the NSD server and receiving the reply, in addition to the latency of the I/O done on disk by the server.
I hope this answers your query.


Regards, The Spectrum Scale (GPFS) team

------------------------------------------------------------------------------------------------------------------
If you feel that your question can benefit other users of Spectrum Scale (GPFS), then please post it to the public IBM developerWorks Forum at https://www.ibm.com/developerworks/community/forums/html/forum?id=11111111-0000-0000-0000-000000000479.

If your query concerns a potential software error in Spectrum Scale (GPFS) and you have an IBM software maintenance contract, please contact 1-800-237-5511 in the United States or your local IBM Service Center in other countries.

The forum is informally monitored as time permits and should not be used for priority messages to the Spectrum Scale (GPFS) team.



From:        "Buterbaugh, Kevin L" <Kevin.Buterbaugh at Vanderbilt.Edu>
To:        gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Date:        02/16/2019 08:18 PM
Subject:        [gpfsug-discuss] Clarification of mmdiag --iohist output
Sent by:        gpfsug-discuss-bounces at spectrumscale.org
________________________________



Hi All,

Been reading man pages, docs, and Googling, and haven’t found a definitive answer to this question, so I knew exactly where to turn… ;-)

I’m dealing with some slow I/Os to certain storage arrays in our environments … like really, really slow I/Os … here’s just one example from one of my NSD servers of a 10 second I/O:

08:49:34.943186  W        data   30:41615622144   2048 10115.192  srv   dm-92                  <client IP redacted>

So here’s my question … when mmdiag --iohist tells me that the I/O above took slightly over 10 seconds, is that:

1.  The time from when the NSD server received the I/O request from the client until it shipped the data back onto the wire towards the client?
2.  The time from when the client issued the I/O request until it received the data back from the NSD server?
3.  Something else?

I’m thinking it’s #1, but want to confirm.  Which one it is has very obvious implications for our troubleshooting steps.  Thanks in advance…

Kevin
--
Kevin Buterbaugh - Senior System Administrator
Vanderbilt University - Advanced Computing Center for Research and Education
Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633



_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
