[gpfsug-discuss] RAID type for system pool
Frederick Stock
stockf at us.ibm.com
Mon Sep 10 20:49:36 BST 2018
My guess is that the "metadata" IO is either for directory data, since
directories are considered metadata, or for fileset metadata.
Fred
__________________________________________________
Fred Stock | IBM Pittsburgh Lab | 720-430-8821
stockf at us.ibm.com
From: "Buterbaugh, Kevin L" <Kevin.Buterbaugh at Vanderbilt.Edu>
To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Date: 09/10/2018 02:27 PM
Subject: [gpfsug-discuss] RAID type for system pool
Sent by: gpfsug-discuss-bounces at spectrumscale.org
Hi All,
So while I’m waiting for the purchase of new hardware to go through, I’m
trying to gather more data about the current workload. One of the things
I’m trying to do is get a handle on the ratio of reads versus writes for
my metadata.
I’m using “mmdiag --iohist” … in this case “dm-12” is one of my
metadataOnly disks, and I’m running this on the primary NSD server for that
NSD. I’m seeing output like:
11:22:13.931117 W inode 4:299844163 1 0.448 srv dm-12
<redacted>
11:22:13.932344 R metadata 4:36659676 4 0.307 srv dm-12
<redacted>
11:22:13.932005 W logData 4:49676176 1 0.726 srv dm-12
<redacted>
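To turn that history into a rough read/write tally per buffer type, I’m
using a small script along these lines. This is a minimal sketch that
assumes the column layout in the sample above (timestamp, R/W, buffer
type, disk:sector, nSec, time, srv, device) and that mmdiag is on the
PATH (typically /usr/lpp/mmfs/bin); the field positions may differ by
Scale release, so treat the indices as an assumption:

#!/usr/bin/env python3
# Rough tally of reads vs. writes per buffer type from "mmdiag --iohist".
# Assumes data lines follow the layout shown in the sample output above;
# adjust the field indices if your release formats the history differently.
import subprocess
from collections import Counter

out = subprocess.run(["mmdiag", "--iohist"],
                     capture_output=True, text=True, check=True).stdout

counts = Counter()
for line in out.splitlines():
    fields = line.split()
    # Data lines start with an hh:mm:ss.usec timestamp; header and
    # separator lines don't, so they get skipped here.
    if len(fields) >= 3 and fields[0].count(":") == 2 \
            and fields[1] in ("R", "W"):
        counts[(fields[2], fields[1])] += 1  # key: (buffer type, R/W)

for (buftype, rw), n in sorted(counts.items()):
    print(f"{buftype:12s} {rw}  {n}")

Running it repeatedly on the NSD server while the workload is active gives
a more representative picture than a single snapshot, since the iohist
buffer only keeps a limited recent history.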
And I’m confused about the difference between “inode” and “metadata” (I at
least _think_ I understand “logData”). The man page for mmdiag doesn’t
explain these categories, and I haven’t found anything useful in my Googling yet.
This is on a filesystem that currently uses 512-byte inodes, if that
matters. Thanks…
Kevin
—
Kevin Buterbaugh - Senior System Administrator
Vanderbilt University - Advanced Computing Center for Research and
Education
Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss