[gpfsug-discuss] Associating I/O operations with files/processes

Andreas Petzold (SCC) andreas.petzold at kit.edu
Tue May 30 13:16:40 BST 2017


	Dear group,

first a quick introduction: at KIT we are running a 20+PB storage system with several large (1-9PB) file systems. We have a 14-node NSD server cluster and five small (~10-node) protocol node clusters, each of which mounts one of the file systems. The protocol nodes run server software (dCache, xrootd) specific to our users, who are primarily the LHC experiments at CERN. The GPFS version is 4.2.2 everywhere. All servers are connected via InfiniBand, while the protocol nodes communicate with their clients via Ethernet.

Now let me describe the problem we are facing. For the past few days, one of the protocol nodes has been showing a very strange and as yet unexplained I/O behaviour. Previously we would typically see reads like this (iohist example from a well-behaved node):

14:03:37.637526  R        data   32:138835918848  8192   46.626  cli  0A417D79:58E3B179    172.18.224.19 
14:03:37.660177  R        data   18:12590325760   8192   25.498  cli  0A4179AD:58E3AE66    172.18.224.14 
14:03:37.640660  R        data   15:106365067264  8192   45.682  cli  0A4179AD:58E3ADD7    172.18.224.14 
14:03:37.657006  R        data   35:130482421760  8192   30.872  cli  0A417DAD:58E3B266    172.18.224.21 
14:03:37.643908  R        data   33:107847139328  8192   45.571  cli  0A417DAD:58E3B206    172.18.224.21 

Since then we have been seeing this on the problematic node:

14:06:27.253537  R        data   46:126258287872     8   15.474  cli  0A4179AB:58E3AE54    172.18.224.13 
14:06:27.268626  R        data   40:137280768624     8    0.395  cli  0A4179AD:58E3ADE3    172.18.224.14 
14:06:27.269056  R        data   46:56452781528      8    0.427  cli  0A4179AB:58E3AE54    172.18.224.13 
14:06:27.269417  R        data   47:97273159640      8    0.293  cli  0A4179AD:58E3AE5A    172.18.224.14 
14:06:27.269293  R        data   49:59102786168      8    0.425  cli  0A4179AD:58E3AE72    172.18.224.14 
14:06:27.269531  R        data   46:142387326944     8    0.340  cli  0A4179AB:58E3AE54    172.18.224.13 
14:06:27.269377  R        data   28:102988517096     8    0.554  cli  0A417879:58E3AD08    172.18.224.10

The number of read ops has gone up by roughly a factor of 1000, which is what one would expect when going from 8192-sector (4 MiB) reads to 8-sector (4 KiB) reads: moving the same amount of data now takes 1024 times as many I/Os.
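To quantify the shift, a quick parser over the iohist output might help. This is a minimal, untested sketch: it assumes the nine whitespace-separated columns shown above, 512-byte sectors, and that the history is piped in on stdin (e.g. from mmdiag --iohist):

#!/usr/bin/env python
# Minimal sketch: tally read sizes per client from iohist output on stdin.
# Assumes the column layout shown above and that the size column (nSec)
# is in 512-byte sectors.
import sys
from collections import Counter, defaultdict

per_client = defaultdict(Counter)
for line in sys.stdin:
    fields = line.split()
    if len(fields) < 9 or fields[1] != 'R':
        continue  # skip headers, writes, malformed lines
    try:
        nsec = int(fields[4])              # I/O size in sectors
    except ValueError:
        continue
    per_client[fields[8]][nsec] += 1       # key on client IP

for client, sizes in sorted(per_client.items()):
    total = sum(sizes.values())
    tiny = sum(n for sec, n in sizes.items() if sec <= 8)
    print("%s: %d reads, %d of them <= 8 sectors" % (client, total, tiny))

Run as "mmdiag --iohist | python iohist-stats.py" on an NSD server; it shows at a glance which client is generating the tiny reads.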

We have already excluded problems with the node itself, so we are focusing on the applications running on it. What we'd like to do is associate the I/O requests with either files or specific processes running on the machine, in order to blame the correct application. Can somebody tell us if this is possible, and if not, whether there are other ways to understand which application is causing this?
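For illustration, a rough sketch of one approach we could try on the protocol node itself: sample /proc/<pid>/io (kernel task I/O accounting) twice and flag processes whose bytes-per-read-syscall ratio is tiny. This assumes task I/O accounting is enabled in the kernel and root access:

#!/usr/bin/env python
# Rough sketch: estimate average bytes per read syscall for every process
# by sampling /proc/<pid>/io twice. Processes issuing huge numbers of
# tiny reads stand out with a small bytes/read ratio.
import os, time

def snapshot():
    stats = {}
    for pid in filter(str.isdigit, os.listdir('/proc')):
        try:
            with open('/proc/%s/io' % pid) as f:
                d = dict(l.split(': ') for l in f.read().splitlines())
            with open('/proc/%s/comm' % pid) as f:
                comm = f.read().strip()
            stats[pid] = (comm, int(d['syscr']), int(d['rchar']))
        except (IOError, OSError):
            continue  # process exited or permission denied
    return stats

before = snapshot()
time.sleep(10)
after = snapshot()

for pid, (comm, syscr, rchar) in sorted(after.items()):
    if pid not in before:
        continue
    dcalls = syscr - before[pid][1]
    dbytes = rchar - before[pid][2]
    if dcalls > 100:  # ignore mostly idle processes
        print("%s (pid %s): %d reads, %.0f bytes/read" %
              (comm, pid, dcalls, float(dbytes) / dcalls))

This only looks at the syscall pattern on the node, which may not map one-to-one onto the NSD-level I/Os in iohist, but it should at least point at a suspect process.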

	Thanks,

		Andreas

-- 

  Karlsruhe Institute of Technology (KIT)
  Steinbuch Centre for Computing (SCC)

  Andreas Petzold

  Hermann-von-Helmholtz-Platz 1, Building 449, Room 202
  D-76344 Eggenstein-Leopoldshafen

  Tel: +49 721 608 24916
  Fax: +49 721 608 24972
  Email: petzold at kit.edu
  www.scc.kit.edu

  KIT – The Research University in the Helmholtz Association

  Since 2010, KIT has been certified as a family-friendly university.

