[gpfsug-discuss] mmapplypolicy didn't migrate everything it should have - why not?

Marc A Kaplan makaplan at us.ibm.com
Sun Apr 16 20:15:40 BST 2017


Let's look at how mmapplypolicy does the reckoning.
Before it starts, it sees your pools as:

[I] GPFS Current Data Pool Utilization in KB and %
Pool_Name                   KB_Occupied        KB_Total  Percent_Occupied
gpfs23capacity              55365193728    124983549952     44.297984614%
gpfs23data                 166747037696    343753326592     48.507759721%
system                                0               0      0.000000000% 
(no user data)
[I] 75142046 of 209715200 inodes used: 35.830520%.

Your rule says you want to migrate data to gpfs23capacity, up to 98% full:

RULE 'OldStuff'
  MIGRATE FROM POOL 'gpfs23data'
  TO POOL 'gpfs23capacity'
  LIMIT(98) WHERE ...
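The LIMIT(98) clause is what caps the choice: files are chosen only while the predicted occupancy of the target pool stays at or below 98%. Here's a rough sketch of that selection logic in plain Python (the file sizes are made up, and the real policy engine orders candidates by WEIGHT before choosing; this simplified loop just takes them in order):

```python
def choose_for_migration(candidates_kb, target_occupied_kb, target_total_kb, limit_pct):
    """Greedily pick candidate files until the target pool would exceed limit_pct."""
    chosen = []
    occupied = target_occupied_kb
    budget = target_total_kb * limit_pct / 100.0  # KB ceiling implied by LIMIT
    for size_kb in candidates_kb:
        if occupied + size_kb > budget:
            continue  # this file would push the pool past the limit; skip it
        chosen.append(size_kb)
        occupied += size_kb
    return chosen, occupied

# Tiny made-up example: 100 KB pool, already 50 KB occupied, LIMIT(98).
chosen, occupied = choose_for_migration([30, 25, 10], 50, 100, 98)
print(chosen, occupied)  # [30, 10] 90 -- the 25 KB file would have overshot
```

So even with millions of matching candidates, the chooser stops adding files once the target pool's predicted occupancy reaches the limit.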

We scan your files, find the matches, and reckon...
[I] Summary of Rule Applicability and File Choices:
 Rule#   Hit_Cnt        KB_Hit   Chosen    KB_Chosen  KB_Ill  Rule
     0   5255960  237675081344  1868858  67355430720       0  RULE 'OldStuff' MIGRATE FROM POOL 'gpfs23data' TO POOL 'gpfs23capacity' LIMIT(98.000000) WHERE(.)

So yes, 5.25 million files match the rule, but the utility chooses the 1.868 million files that add up to 67,355GB, and figures that if it migrates those to gpfs23capacity (also accounting for the migrations made by your second rule), then gpfs23capacity will end up 97.9999% full.
We show you that with our "predictions" message.

Predicted Data Pool Utilization in KB and %:
Pool_Name                   KB_Occupied        KB_Total  Percent_Occupied
gpfs23capacity             122483878944    124983549952     97.999999993%
gpfs23data                 104742360032    343753326592     30.470209865%
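You can check that prediction by hand with the numbers from the run: gpfs23capacity gains what 'OldStuff' moves in and loses what your second rule ('INeedThatAfterAll', 236745504 KB per its summary line) moves back out:

```python
# Reproducing the predicted gpfs23capacity occupancy from the policy run's figures.
capacity_occupied = 55365193728    # KB currently in gpfs23capacity
capacity_total    = 124983549952   # KB total in gpfs23capacity
moved_in          = 67355430720    # KB chosen by 'OldStuff' (into capacity)
moved_out         = 236745504      # KB chosen by 'INeedThatAfterAll' (out of capacity)

predicted = capacity_occupied + moved_in - moved_out
pct = 100.0 * predicted / capacity_total
print(predicted)  # 122483878944 -- exactly the KB_Occupied the prediction shows
# pct comes out a hair under 98, matching the reported 97.999999993%
```

The chosen set lands as close to the 98% ceiling as it can without crossing it.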

So that's why it chooses to migrate "only" about 67TB....

See? Makes sense to me.

Questions:
Did you run with -I yes or -I defer ?

Were some of the files illreplicated or illplaced?

Did you give the cluster-wide space reckoning protocols time to see the 
changes?  mmdf is usually "behind" by some non-negligible amount of time.

What else is going on?
If you're moving, deleting, or creating data by other means while 
mmapplypolicy is running -- it doesn't "know" about that! 

Run it again!





From:   "Buterbaugh, Kevin L" <Kevin.Buterbaugh at Vanderbilt.Edu>
To:     gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Date:   04/16/2017 09:47 AM
Subject:        [gpfsug-discuss] mmapplypolicy didn't migrate everything it should have - why not?
Sent by:        gpfsug-discuss-bounces at spectrumscale.org



Hi All, 

First off, I can open a PMR for this if I need to.  Second, I am far from 
an mmapplypolicy guru.  With that out of the way … I have an mmapplypolicy 
job that didn’t migrate anywhere close to what it could / should have. 
From the log file I have it create, here is the part where it shows the 
policies I told it to invoke:

[I] Qos 'maintenance' configured as inf
[I] GPFS Current Data Pool Utilization in KB and %
Pool_Name                   KB_Occupied        KB_Total  Percent_Occupied
gpfs23capacity              55365193728    124983549952     44.297984614%
gpfs23data                 166747037696    343753326592     48.507759721%
system                                0               0      0.000000000% 
(no user data)
[I] 75142046 of 209715200 inodes used: 35.830520%.
[I] Loaded policy rules from /root/gpfs/gpfs23_migration.policy.
Evaluating policy rules with CURRENT_TIMESTAMP = 2017-04-15 at 01:13:02 UTC
Parsed 2 policy rules.

RULE 'OldStuff'
  MIGRATE FROM POOL 'gpfs23data'
  TO POOL 'gpfs23capacity'
  LIMIT(98)
  WHERE (((DAYS(CURRENT_TIMESTAMP) - DAYS(ACCESS_TIME)) > 14) AND 
(KB_ALLOCATED > 3584))

RULE 'INeedThatAfterAll'
  MIGRATE FROM POOL 'gpfs23capacity'
  TO POOL 'gpfs23data'
  LIMIT(75)
  WHERE ((DAYS(CURRENT_TIMESTAMP) - DAYS(ACCESS_TIME)) < 14)
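For what it's worth, the two WHERE clauses can be rendered in plain Python to make the criteria concrete. This is just an illustrative sketch: the access dates and sizes are hypothetical stand-ins for GPFS file metadata, and GPFS's DAYS() arithmetic is approximated with whole-day date subtraction:

```python
from datetime import date

def matches_oldstuff(access_date, kb_allocated, today):
    # RULE 'OldStuff': not accessed for more than 14 days AND larger than 3584 KB
    return (today - access_date).days > 14 and kb_allocated > 3584

def matches_ineedthatafterall(access_date, today):
    # RULE 'INeedThatAfterAll': accessed within the last 14 days
    return (today - access_date).days < 14

today = date(2017, 4, 15)  # the run's CURRENT_TIMESTAMP date
print(matches_oldstuff(date(2017, 3, 1), 4096, today))     # True: old and big enough
print(matches_oldstuff(date(2017, 4, 10), 4096, today))    # False: accessed too recently
print(matches_ineedthatafterall(date(2017, 4, 10), today)) # True: recently accessed
```

Note that files exactly 14 days old, or at exactly 3584 KB, match neither rule.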

And then the log shows it scanning all the directories and then says, "OK, 
here’s what I’m going to do":

[I] Summary of Rule Applicability and File Choices:
 Rule#   Hit_Cnt        KB_Hit   Chosen    KB_Chosen  KB_Ill  Rule
     0   5255960  237675081344  1868858  67355430720       0  RULE 'OldStuff' MIGRATE FROM POOL 'gpfs23data' TO POOL 'gpfs23capacity' LIMIT(98.000000) WHERE(.)
     1       611     236745504      611    236745504       0  RULE 'INeedThatAfterAll' MIGRATE FROM POOL 'gpfs23capacity' TO POOL 'gpfs23data' LIMIT(75.000000) WHERE(.)

[I] Filesystem objects with no applicable rules: 414911602.

[I] GPFS Policy Decisions and File Choice Totals:
 Chose to migrate 67592176224KB: 1869469 of 5256571 candidates;
Predicted Data Pool Utilization in KB and %:
Pool_Name                   KB_Occupied        KB_Total  Percent_Occupied
gpfs23capacity             122483878944    124983549952     97.999999993%
gpfs23data                 104742360032    343753326592     30.470209865%
system                                0               0      0.000000000% 
(no user data)

Notice that it says it’s only going to migrate fewer than 2 million of the 
5.25 million candidate files!!  And sure enough, that’s all it did:

[I] A total of 1869469 files have been migrated, deleted or processed by 
an EXTERNAL EXEC/script;
        0 'skipped' files and/or errors.

And, not surprisingly, the gpfs23capacity pool on gpfs23 is nowhere near 
98% full:

Disks in storage pool: gpfs23capacity (Maximum disk size allowed is 519 TB)
eon35Ansd        58.2T   35  No  Yes    29.54T ( 51%)    63.93G ( 0%)
eon35Dnsd        58.2T   35  No  Yes    29.54T ( 51%)    64.39G ( 0%)
                 -------------          --------------   -------------
(pool total)    116.4T                  59.08T ( 51%)    128.3G ( 0%)

I don’t understand why it only migrated a small subset of what it could / 
should have.

We are doing a migration from one filesystem (gpfs21) to gpfs23 and I 
really need to stuff my gpfs23capacity pool as full of data as I can to 
keep the migration going.  Any ideas anyone?  Thanks in advance…

—
Kevin Buterbaugh - Senior System Administrator
Vanderbilt University - Advanced Computing Center for Research and 
Education
Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633


_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss





