[gpfsug-discuss] mmapplypolicy didn't migrate everything it should have - why not?
Buterbaugh, Kevin L
Kevin.Buterbaugh at Vanderbilt.Edu
Sun Apr 16 14:47:20 BST 2017
Hi All,
First off, I can open a PMR for this if I need to. Second, I am far from an mmapplypolicy guru. With that out of the way … I have an mmapplypolicy job that didn’t migrate anywhere close to what it could / should have. From the log file I had it create, here is the part where it shows the policies I told it to invoke:
[I] Qos 'maintenance' configured as inf
[I] GPFS Current Data Pool Utilization in KB and %
Pool_Name KB_Occupied KB_Total Percent_Occupied
gpfs23capacity 55365193728 124983549952 44.297984614%
gpfs23data 166747037696 343753326592 48.507759721%
system 0 0 0.000000000% (no user data)
[I] 75142046 of 209715200 inodes used: 35.830520%.
[I] Loaded policy rules from /root/gpfs/gpfs23_migration.policy.
Evaluating policy rules with CURRENT_TIMESTAMP = 2017-04-15 at 01:13:02 UTC
Parsed 2 policy rules.
RULE 'OldStuff'
MIGRATE FROM POOL 'gpfs23data'
TO POOL 'gpfs23capacity'
LIMIT(98)
WHERE (((DAYS(CURRENT_TIMESTAMP) - DAYS(ACCESS_TIME)) > 14) AND (KB_ALLOCATED > 3584))
RULE 'INeedThatAfterAll'
MIGRATE FROM POOL 'gpfs23capacity'
TO POOL 'gpfs23data'
LIMIT(75)
WHERE ((DAYS(CURRENT_TIMESTAMP) - DAYS(ACCESS_TIME)) < 14)
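In case it helps, here is a rough Python analogue of the 'OldStuff' WHERE clause — just my reading of the rule, not how GPFS actually evaluates it. DAYS() works in whole days, and the helper name is mine:

```python
# Rough analogue of: (DAYS(CURRENT_TIMESTAMP) - DAYS(ACCESS_TIME)) > 14
#                    AND KB_ALLOCATED > 3584
from datetime import date

def old_stuff_matches(access_date: date, kb_allocated: int,
                      today: date = date(2017, 4, 15)) -> bool:
    """True if a file would be a candidate for migration to gpfs23capacity."""
    return (today - access_date).days > 14 and kb_allocated > 3584

print(old_stuff_matches(date(2017, 3, 1), 4096))   # old and > 3.5 MB -> True
print(old_stuff_matches(date(2017, 4, 10), 4096))  # accessed 5 days ago -> False
```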
The log then shows it scanning all the directories and finally says, in effect, "OK, here’s what I’m going to do":
[I] Summary of Rule Applicability and File Choices:
Rule# Hit_Cnt KB_Hit Chosen KB_Chosen KB_Ill Rule
0 5255960 237675081344 1868858 67355430720 0 RULE 'OldStuff' MIGRATE FROM POOL 'gpfs23data' TO POOL 'gpfs23capacity' LIMIT(98.000000) WHERE(.)
1 611 236745504 611 236745504 0 RULE 'INeedThatAfterAll' MIGRATE FROM POOL 'gpfs23capacity' TO POOL 'gpfs23data' LIMIT(75.000000) WHERE(.)
[I] Filesystem objects with no applicable rules: 414911602.
[I] GPFS Policy Decisions and File Choice Totals:
Chose to migrate 67592176224KB: 1869469 of 5256571 candidates;
Predicted Data Pool Utilization in KB and %:
Pool_Name KB_Occupied KB_Total Percent_Occupied
gpfs23capacity 122483878944 124983549952 97.999999993%
gpfs23data 104742360032 343753326592 30.470209865%
system 0 0 0.000000000% (no user data)
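In case it helps anyone spot what I'm missing: if I work through the planner's own numbers, the predicted gpfs23capacity occupancy comes out to exactly the LIMIT(98) ceiling, which would explain why it chose only ~1.87 million of the 5.26 million candidates. A quick arithmetic check, using only figures from the log above:

```python
# Sanity-check the predicted utilization for gpfs23capacity
# (all numbers copied from the mmapplypolicy log above).
occupied_now = 55365193728    # KB currently in gpfs23capacity
chosen_in    = 67355430720    # KB chosen by 'OldStuff' (into capacity)
chosen_out   = 236745504      # KB chosen by 'INeedThatAfterAll' (out of capacity)
pool_total   = 124983549952   # KB total in gpfs23capacity

predicted = occupied_now + chosen_in - chosen_out
print(predicted)                       # 122483878944, matching the log
print(100.0 * predicted / pool_total)  # ~97.999999993 -> right at LIMIT(98)
```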
Notice that it says it’s only going to migrate fewer than 2 million of the 5.25 million candidate files! And sure enough, that’s all it did:
[I] A total of 1869469 files have been migrated, deleted or processed by an EXTERNAL EXEC/script;
0 'skipped' files and/or errors.
And, not surprisingly, the gpfs23capacity pool on gpfs23 is nowhere near 98% full:
Disks in storage pool: gpfs23capacity (Maximum disk size allowed is 519 TB)
eon35Ansd 58.2T 35 No Yes 29.54T ( 51%) 63.93G ( 0%)
eon35Dnsd 58.2T 35 No Yes 29.54T ( 51%) 64.39G ( 0%)
------------- -------------------- -------------------
(pool total) 116.4T 59.08T ( 51%) 128.3G ( 0%)
I don’t understand why it migrated only a small subset of what it could / should have.
We are doing a migration from one filesystem (gpfs21) to gpfs23 and I really need to stuff my gpfs23capacity pool as full of data as I can to keep the migration going. Any ideas anyone? Thanks in advance…
—
Kevin Buterbaugh - Senior System Administrator
Vanderbilt University - Advanced Computing Center for Research and Education
Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633