<font size=2 face="sans-serif">Let's look at how mmapplypolicy does the
reckoning.</font><br><font size=2 face="sans-serif">Before it starts, it sees your pools
as:</font><br><br><font size=3>[I] GPFS Current Data Pool Utilization in KB and %</font><br><font size=3>Pool_Name KB_Occupied KB_Total Percent_Occupied</font><br><font size=3>gpfs23capacity 55365193728 124983549952 44.297984614%</font><br><font size=3>gpfs23data 166747037696 343753326592 48.507759721%</font><br><font size=3>system 0 0 0.000000000% (no user data)</font><br><font size=3>[I] 75142046 of 209715200 inodes used: 35.830520%.</font><br><font size=2 face="sans-serif"><br>Your rule says you want to migrate data to gpfs23capacity, up to 98% full:</font><br><br><font size=3>RULE 'OldStuff'</font><br><font size=3> MIGRATE FROM POOL 'gpfs23data'</font><br><font size=3> TO POOL 'gpfs23capacity'</font><br><font size=3> LIMIT(98) WHERE ...</font><br><br><font size=3>We scan your files and reckon...</font><br><font size=3>[I] Summary of Rule Applicability and File Choices:</font><br><font size=3>Rule# Hit_Cnt KB_Hit Chosen KB_Chosen KB_Ill Rule</font><br><font size=3>0 5255960 237675081344 1868858 67355430720 0 RULE 'OldStuff' MIGRATE FROM POOL 'gpfs23data' TO POOL 'gpfs23capacity' LIMIT(98.000000) WHERE(.)</font><br><br><font size=3>So yes, 5.25 million files match the rule, but the utility
chooses 1.868 million files that add up to 67,355 GB and figures that if it migrates those to gpfs23capacity</font><br><font size=3>(also figuring in the other migrations by your second rule), then the gpfs23capacity pool will end up 97.9999% full.</font><br><font size=3>We show you that with our "predictions" message.</font><br><br><font size=3>Predicted Data Pool Utilization in KB and %:</font><br><font size=3>Pool_Name
 KB_Occupied KB_Total Percent_Occupied</font><br><font size=3>gpfs23capacity 122483878944 124983549952 97.999999993%</font><br><font size=3>gpfs23data 104742360032 343753326592 30.470209865%</font><br><br><font size=3>So that's why it chooses to migrate "only" about 67 TB...</font><br><br><font size=3>See? Makes sense to me.</font><br><br><font size=3>Questions:</font><br><font size=3>Did you run with -I yes or -I defer?</font><br><br><font size=3>Were some of the files ill-replicated or ill-placed?</font><br><br><font size=3>Did you give the cluster-wide space reckoning protocols time to see the changes? mmdf is usually "behind" by some non-negligible amount of time.</font><br><br><font size=3>What else is going on?</font><br><font size=2 face="sans-serif">If you're moving or deleting
or creating data by other means while mmapplypolicy is running -- it doesn't
"know" about that!</font><br><br><font size=2 face="sans-serif">Run it again!</font><br><br><font size=2 face="sans-serif">Marc A Kaplan</font><br><br><font size=1 color=#5f5f5f face="sans-serif">From:
</font><font size=1 face="sans-serif">"Buterbaugh, Kevin
L" <Kevin.Buterbaugh@Vanderbilt.Edu></font><br><font size=1 color=#5f5f5f face="sans-serif">To:
</font><font size=1 face="sans-serif">gpfsug main discussion
list <gpfsug-discuss@spectrumscale.org></font><br><font size=1 color=#5f5f5f face="sans-serif">Date:
</font><font size=1 face="sans-serif">04/16/2017 09:47 AM</font><br><font size=1 color=#5f5f5f face="sans-serif">Subject:
</font><font size=1 face="sans-serif">[gpfsug-discuss]
mmapplypolicy didn't migrate everything it should
have - why not?</font><br><font size=1 color=#5f5f5f face="sans-serif">Sent by:
</font><font size=1 face="sans-serif">gpfsug-discuss-bounces@spectrumscale.org</font><br><hr noshade><br><br><br><font size=3>Hi All, </font><br><br><font size=3>First off, I can open a PMR for this if I need to. Second,
I am far from an mmapplypolicy guru. With that out of the way …
I have an mmapplypolicy job that didn’t migrate anywhere close to what
it could / should have. From the log file I have it create, here
is the part where it shows the policies I told it to invoke:</font><br><br><font size=3>[I] Qos 'maintenance' configured as inf</font><br><font size=3>[I] GPFS Current Data Pool Utilization in KB and %</font><br><font size=3>Pool_Name
 KB_Occupied KB_Total Percent_Occupied</font><br><font size=3>gpfs23capacity 55365193728 124983549952 44.297984614%</font><br><font size=3>gpfs23data 166747037696 343753326592 48.507759721%</font><br><font size=3>system 0 0 0.000000000% (no user data)</font><br><font size=3>[I] 75142046 of 209715200 inodes used: 35.830520%.</font><br><font size=3>[I] Loaded policy rules from /root/gpfs/gpfs23_migration.policy.</font><br><font size=3>Evaluating policy rules with CURRENT_TIMESTAMP = 2017-04-15@01:13:02
UTC</font><br><font size=3>Parsed 2 policy rules.</font><br><br><font size=3>RULE 'OldStuff'</font><br><font size=3> MIGRATE FROM POOL 'gpfs23data'</font><br><font size=3> TO POOL 'gpfs23capacity'</font><br><font size=3> LIMIT(98)</font><br><font size=3> WHERE (((DAYS(CURRENT_TIMESTAMP) - DAYS(ACCESS_TIME))
> 14) AND (KB_ALLOCATED > 3584))</font><br><br><font size=3>RULE 'INeedThatAfterAll'</font><br><font size=3> MIGRATE FROM POOL 'gpfs23capacity'</font><br><font size=3> TO POOL 'gpfs23data'</font><br><font size=3> LIMIT(75)</font><br><font size=3> WHERE ((DAYS(CURRENT_TIMESTAMP) - DAYS(ACCESS_TIME))
< 14)</font><br><br><font size=3>And then the log shows it scanning all the directories
and then says, "OK, here’s what I’m going to do":</font><br><br><font size=3>[I] Summary of Rule Applicability and File Choices:</font><br><font size=3> Rule# Hit_Cnt
 KB_Hit Chosen KB_Chosen KB_Ill Rule</font><br><font size=3>0 5255960 237675081344 1868858 67355430720 0 RULE 'OldStuff' MIGRATE FROM POOL 'gpfs23data' TO POOL 'gpfs23capacity' LIMIT(98.000000) WHERE(.)</font><br><font size=3>1 611 236745504 611 236745504 0 RULE 'INeedThatAfterAll' MIGRATE FROM POOL 'gpfs23capacity' TO POOL 'gpfs23data' LIMIT(75.000000) WHERE(.)</font><br><br><font size=3>[I] Filesystem objects with no applicable rules: 414911602.</font><br><br><font size=3>[I] GPFS Policy Decisions and File Choice Totals:</font><br><font size=3> Chose to migrate 67592176224KB: 1869469 of 5256571
candidates;</font><br><font size=3>Predicted Data Pool Utilization in KB and %:</font><br><font size=3>Pool_Name
 KB_Occupied KB_Total Percent_Occupied</font><br><font size=3>gpfs23capacity 122483878944 124983549952 97.999999993%</font><br><font size=3>gpfs23data 104742360032 343753326592 30.470209865%</font><br><font size=3>system 0 0 0.000000000% (no user data)</font><br><br><font size=3>Notice that it says it’s only going to migrate less than
2 million of the 5.25 million candidate files!! And sure enough,
that’s all it did:</font><br><br><font size=3>[I] A total of 1869469 files have been migrated, deleted
or processed by an EXTERNAL EXEC/script;</font><br><font size=3> 0 'skipped' files and/or errors.</font><br><br><font size=3>And, not surprisingly, the gpfs23capacity pool on gpfs23
is nowhere near 98% full:</font><br><br><font size=3>Disks in storage pool: gpfs23capacity (Maximum disk size
allowed is 519 TB)</font><br><font size=3>eon35Ansd 58.2T 35 No Yes 29.54T ( 51%) 63.93G ( 0%)</font><br><font size=3>eon35Dnsd 58.2T 35 No Yes 29.54T ( 51%) 64.39G ( 0%)</font><br><font size=3>------------- -------------------- -------------------</font><br><font size=3>(pool total) 116.4T 59.08T ( 51%) 128.3G ( 0%)</font><br><br><font size=3>I don’t understand why it only migrated a small subset
of what it could / should have?</font><br><br><font size=3>We are doing a migration from one filesystem (gpfs21)
to gpfs23 and I really need to stuff my gpfs23capacity pool as full of
data as I can to keep the migration going. Any ideas anyone? Thanks
in advance…</font><br><br><font size=3>—</font><br><font size=3>Kevin Buterbaugh - Senior System Administrator</font><br><font size=3>Vanderbilt University - Advanced Computing Center for
Research and Education</font><br><a href=mailto:Kevin.Buterbaugh@vanderbilt.edu><font size=3 color=blue><u>Kevin.Buterbaugh@vanderbilt.edu</u></font></a><font size=3> - (615) 875-9633</font><br><br><br><tt><font size=2>_______________________________________________<br>gpfsug-discuss mailing list<br>gpfsug-discuss at spectrumscale.org<br></font></tt><a href="http://gpfsug.org/mailman/listinfo/gpfsug-discuss"><tt><font size=2>http://gpfsug.org/mailman/listinfo/gpfsug-discuss</font></tt></a><tt><font size=2><br></font></tt><br><br><BR>
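<br><font size=2 face="sans-serif">A footnote to Marc's reckoning above: the numbers in the two rule summaries reproduce the predicted gpfs23capacity utilization exactly. Here is a minimal sketch of that arithmetic as I read the log (plain Python, not any actual mmapplypolicy interface; the variable names are my own):</font><br>

```python
# All figures in KB, copied from the mmapplypolicy log quoted above.
cap_occupied  = 55365193728    # gpfs23capacity, current occupancy
cap_total     = 124983549952   # gpfs23capacity, pool size
kb_into_cap   = 67355430720    # KB_Chosen by RULE 'OldStuff' (into capacity)
kb_out_of_cap = 236745504      # KB_Chosen by RULE 'INeedThatAfterAll' (out of capacity)

# The "Chose to migrate ...KB" total is simply both rules summed.
total_chosen = kb_into_cap + kb_out_of_cap
print(total_chosen)            # 67592176224, matching the log

# Predicted occupancy after both migrations: add inbound, subtract outbound.
predicted = cap_occupied + kb_into_cap - kb_out_of_cap
print(predicted)               # 122483878944, matching the "Predicted" table

# And that lands just a hair under the LIMIT(98) ceiling.
pct = 100.0 * predicted / cap_total
print(round(pct, 9))           # 97.999999993
```

<font size=2 face="sans-serif">So the choice of 1.868 million of the 5.25 million candidates is exactly what fills the target pool to its LIMIT; the remaining candidates are left behind because migrating them would push gpfs23capacity past 98%.</font><br>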