[gpfsug-discuss] big difference between output of 'mmlsquota' and 'du'?

Laurence Horrocks-Barlow laurence at qsplace.co.uk
Mon Sep 12 21:46:55 BST 2016


However, replicated files should show up in 'ls' as taking about double the space.

I.e. "ls -lash"

49G -r--------    1 root   root   25G Sep 12 21:11 Somefile

I know you've said you checked ls vs. du for allocated space, but it might be worth a double check.
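A quick way to put the two numbers side by side is GNU stat (a sketch; st_blocks is in 512-byte units, and the file path below is just a placeholder):

```shell
# Print a file's apparent size next to its allocated bytes.
# On a fileset with data replication 2, allocated should be roughly
# double apparent; for sparse files it will be much smaller.
alloc_vs_size() {
    printf 'apparent:  %s\n' "$(stat -c %s "$1")"
    printf 'allocated: %s\n' $(( $(stat -c %b "$1") * 512 ))
}
# e.g. alloc_vs_size /srv/gsfs0/projects/gbsc/somefile
```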

Also check that you haven't got a load of snapshots, especially if you have high file churn, which will create new blocks; although with your figures it'd have to be very high churn.

-- Lauz

On 12 September 2016 21:26:51 BST, "Sobey, Richard A" <r.sobey at imperial.ac.uk> wrote:
>My thoughts exactly.
>
>Richard
>
>From: gpfsug-discuss-bounces at spectrumscale.org
>[mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of
>Buterbaugh, Kevin L
>Sent: 12 September 2016 20:08
>To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
>Subject: Re: [gpfsug-discuss] big difference between output of
>'mmlsquota' and 'du'?
>
>Hi Alex,
>
>While the numbers don’t match exactly, they’re close enough to prompt
>me to ask if data replication is possibly set to two?  Thanks…
>
>Kevin
>
>On Sep 12, 2016, at 2:03 PM, Alex Chekholko
><chekh at stanford.edu<mailto:chekh at stanford.edu>> wrote:
>
>Hi,
>
>For a fileset with a quota on it, we have mmlsquota reporting 39TB
>utilization (out of 50TB quota), with 0 in_doubt.
>
>Running a 'du' on the same directory (where the fileset is junctioned)
>shows 21TB usage.
>
>I looked for sparse files (files that report a different size via ls vs
>du).  I also looked at 'du --apparent-size ...'
>
>https://en.wikipedia.org/wiki/Sparse_file
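One way to hunt for such files tree-wide is GNU find's %s/%b format directives (a hedged sketch; the thresholds are arbitrary, and filenames containing spaces will be truncated by the quick awk field split):

```shell
# Walk a tree and flag files whose allocation (st_blocks * 512) is far
# below the apparent size (sparse candidates) or far above it
# (replication / preallocation candidates).
scan_alloc_mismatch() {
    find "$1" -xdev -type f -printf '%s %b %p\n' 2>/dev/null |
    awk '{ alloc = $2 * 512
           if (alloc < $1 * 0.5)              printf "sparse?     %s\n", $3
           else if (alloc > $1 * 1.9 + 65536) printf "overalloc?  %s\n", $3 }'
}
# e.g. scan_alloc_mismatch /srv/gsfs0/projects/gbsc
```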
>
>What else could it be?
>
>Is there some attribute I can scan for inside GPFS?
>Maybe where FILE_SIZE does not equal KB_ALLOCATED?
>https://www.ibm.com/support/knowledgecenter/STXKQY_4.2.0/com.ibm.spectrum.scale.v4r2.adv.doc/bl1adv_usngfileattrbts.htm
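A policy scan along those lines might look like the following (a hedged sketch: the rule and list names are made up; note KB_ALLOCATED is in kilobytes while FILE_SIZE is in bytes, hence the scaling):

```
/* mismatch.pol -- list files whose allocated space differs from their size */
RULE EXTERNAL LIST 'mismatch' EXEC ''
RULE 'find-mismatch' LIST 'mismatch'
     SHOW(VARCHAR(FILE_SIZE) || ' ' || VARCHAR(KB_ALLOCATED))
     WHERE KB_ALLOCATED * 1024 != FILE_SIZE

/* run deferred so it only writes the candidate list, e.g.:
   mmapplypolicy gsfs0 -P mismatch.pol -I defer -f /tmp/mismatch */
```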
>
>
>[root at scg-gs0 ~]# du -sm --apparent-size /srv/gsfs0/projects/gbsc/*
>3977 /srv/gsfs0/projects/gbsc/Backups
>1 /srv/gsfs0/projects/gbsc/benchmark
>13109 /srv/gsfs0/projects/gbsc/Billing
>198719 /srv/gsfs0/projects/gbsc/Clinical
>1 /srv/gsfs0/projects/gbsc/Clinical_Vendors
>1206523 /srv/gsfs0/projects/gbsc/Data
>1 /srv/gsfs0/projects/gbsc/iPoP
>123165 /srv/gsfs0/projects/gbsc/Macrogen
>58676 /srv/gsfs0/projects/gbsc/Misc
>6625890 /srv/gsfs0/projects/gbsc/mva
>1 /srv/gsfs0/projects/gbsc/Proj
>17 /srv/gsfs0/projects/gbsc/Projects
>3290502 /srv/gsfs0/projects/gbsc/Resources
>1 /srv/gsfs0/projects/gbsc/SeqCenter
>1 /srv/gsfs0/projects/gbsc/share
>514041 /srv/gsfs0/projects/gbsc/SNAP_Scoring
>1 /srv/gsfs0/projects/gbsc/TCGA_Variants
>267873 /srv/gsfs0/projects/gbsc/tools
>9597797 /srv/gsfs0/projects/gbsc/workspace
>
>(adds up to about 21TB)
>
>[root at scg-gs0 ~]# mmlsquota -j projects.gbsc --block-size=G gsfs0
>                         Block Limits                                   |     File Limits
>Filesystem type             GB      quota      limit   in_doubt   grace |    files   quota    limit in_doubt    grace  Remarks
>gsfs0      FILESET       39889      51200      51200          0    none |  1663212       0        0        4     none
>
>
>[root at scg-gs0 ~]# mmlsfileset gsfs0 |grep gbsc
>projects.gbsc            Linked    /srv/gsfs0/projects/gbsc
>
>Regards,
>--
>Alex Chekholko chekh at stanford.edu<mailto:chekh at stanford.edu>
>
>_______________________________________________
>gpfsug-discuss mailing list
>gpfsug-discuss at spectrumscale.org
>http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>
>Kevin Buterbaugh - Senior System Administrator
>Vanderbilt University - Advanced Computing Center for Research and
>Education
>Kevin.Buterbaugh at vanderbilt.edu<mailto:Kevin.Buterbaugh at vanderbilt.edu>
>- (615)875-9633
>
>
>
>
>
>------------------------------------------------------------------------
>
>_______________________________________________
>gpfsug-discuss mailing list
>gpfsug-discuss at spectrumscale.org
>http://gpfsug.org/mailman/listinfo/gpfsug-discuss

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.