[gpfsug-discuss] Mass UID migration suggestions

Jez Tucker jtucker at pixitmedia.com
Thu Aug 3 07:46:36 BST 2017


Perhaps IBM might consider letting you commit it to 
https://github.com/gpfsug/gpfsug-tools, he says, asking out loud... It'll 
require a friendly IBMer to take up the reins for you.  Scott? :-)

Jez


On 03/08/17 02:00, Aaron Knister wrote:
> I'm a little late to the party here but I thought I'd share our recent 
> experiences.
>
> We recently completed a mass UID number migration (half a billion 
> inodes) and developed two tools ("luke filewalker" and the 
> "mmilleniumfacl") to get the job done. Both luke filewalker and the 
> mmilleniumfacl are based heavily on the code in 
> /usr/lpp/mmfs/samples/util/tsreaddir.c and 
> /usr/lpp/mmfs/samples/util/tsinode.c.
>
> luke filewalker targets traditional POSIX permissions whereas 
> mmilleniumfacl targets POSIX ACLs. Both tools traverse the filesystem 
> in parallel, and both, but particularly the latter, are extremely I/O 
> intensive on your metadata disks.
>
> The gist of luke filewalker is to scan the inode structures using the 
> GPFS APIs and populate a mapping of inode number to UID and GID. It 
> then walks the filesystem in parallel using the APIs, looks up each 
> inode number in the in-memory hash, and if appropriate changes 
> ownership using the chown() API.
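>
> In rough outline, the first phase looks something like this (a minimal 
> sketch against the documented gpfs.h inode-scan calls, with error 
> handling trimmed; printing stands in for the in-memory hash):
>
> #include <stdio.h>
> #include <gpfs.h>
>
> int main(int argc, char **argv)
> {
>     gpfs_fssnap_handle_t *fsp;
>     gpfs_iscan_t *iscan;
>     const gpfs_iattr_t *iattr;
>     gpfs_ino_t maxIno;
>
>     /* handle for the live filesystem named on the command line */
>     fsp = gpfs_get_fssnaphandle_by_path(argv[1]);
>     iscan = gpfs_open_inodescan(fsp, NULL, &maxIno);
>
>     /* gpfs_next_inode() returns 0 with a NULL iattr at end of scan */
>     while (gpfs_next_inode(iscan, maxIno, &iattr) == 0 && iattr) {
>         /* record inode -> owner; the real tool inserts this into a
>          * hash keyed on ia_inode rather than printing it */
>         printf("%llu %u %u\n", (unsigned long long)iattr->ia_inode,
>                (unsigned)iattr->ia_uid, (unsigned)iattr->ia_gid);
>     }
>
>     gpfs_close_inodescan(iscan);
>     gpfs_free_fssnaphandle(fsp);
>     return 0;
> }
>
> Compile against the GPFS header and library (roughly: gcc 
> -I/usr/lpp/mmfs/include walk.c -lgpfs). The second phase is the 
> parallel directory walk along the lines of tsreaddir.c, calling 
> chown() only on paths whose inode number hits in the hash.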
>
> The mmilleniumfacl doesn't have the luxury of scanning for POSIX ACLs 
> using the GPFS inode API, so it walks the filesystem and reads the ACL 
> of any and every file, updating the ACL entries as appropriate.
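>
> Per file, the ACL rewrite is conceptually just the following (a sketch 
> only -- the struct and constant names are from gpfs.h as I recall 
> them, so check your header, and old2new() is a stand-in for the UID 
> mapping):
>
> #include <gpfs.h>
>
> extern gpfs_uid_t old2new(gpfs_uid_t old);   /* hypothetical lookup */
>
> static int remap_posix_acl(const char *path)
> {
>     char buf[65536];                  /* ACLs are variable length */
>     gpfs_acl_t *acl = (gpfs_acl_t *)buf;
>     int i, changed = 0;
>
>     acl->acl_len = sizeof(buf);       /* buffer size on input */
>     acl->acl_version = 0;             /* accept whatever is stored */
>     acl->acl_type = GPFS_ACL_TYPE_ACCESS;
>     if (gpfs_getacl(path, GPFS_GETACL_STRUCT, acl) != 0)
>         return -1;
>     if (acl->acl_version != GPFS_ACL_VERSION_POSIX)
>         return 0;                     /* NFSv4 ACLs use ace_v4 instead */
>
>     for (i = 0; i < (int)acl->acl_nace; i++) {
>         gpfs_ace_v1_t *ace = &acl->ace_v1[i];
>         if (ace->ace_type == GPFS_ACL_USER) {   /* named-user entry */
>             gpfs_uid_t nu = old2new(ace->ace_who);
>             if (nu != ace->ace_who) { ace->ace_who = nu; changed = 1; }
>         }
>     }
>     return changed ? gpfs_putacl(path, GPFS_PUTACL_STRUCT, acl) : 0;
> }
>
> (Named-group entries want the same treatment with GPFS_ACL_GROUP and 
> the GID map, plus a second pass over directories with acl_type set to 
> the default ACL.)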
>
> I'm going to see if I can share the source code for both tools, 
> although I don't know if I can post it here since it modifies existing 
> IBM source code. Could someone from IBM chime in here? If I were to 
> send the code to IBM, could they publish it, perhaps on the wiki?
>
> -Aaron
>
> On 6/30/17 11:20 AM, hpc-luke at uconn.edu wrote:
>> Hello,
>>
>>     We're trying to change most of our users' UIDs. Is there a clean 
>> way to migrate all of one user's files with, say, `mmapplypolicy`? We 
>> have to change the owner of around 273539588 files, and my estimates 
>> for runtime are around 6 days.
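>>
>> (For concreteness, something like these policy rules is what I was 
>> picturing -- just a sketch, with a placeholder UID and script:
>>
>> RULE EXTERNAL LIST 'oldowner' EXEC '/usr/local/sbin/chown_batch.sh'
>> RULE 'byuid' LIST 'oldowner' WHERE USER_ID = 12345
>>
>> where mmapplypolicy hands batches of the matched paths to the script, 
>> and its -N option spreads the work over several nodes.)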
>>
>>     What we've been doing is indexing all of the files and splitting 
>> them up by owner, which takes around an hour, and then locking each 
>> user out while we chown their files. I made it multi-threaded, which 
>> oddly gave a 10% speedup despite my expectation that multi-threaded 
>> access from a single node would not give any speedup.
>>
>>     Generally I'm looking for advice on how to make the chowning 
>> faster. Would spreading the chown processes over multiple nodes 
>> improve performance? Should I skip stat()ing the files before running 
>> lchown() on them, since lchown() checks the file before changing it? 
>> I saw mention of inodescan() in an old gpfsug email; it speeds up 
>> disk read access by not guaranteeing that the data is up to date. We 
>> have a maintenance day coming up where all users will be locked out, 
>> so the file handles(?) from GPFS's perspective will not be able to go 
>> stale. Is there a function with constraints similar to inodescan() 
>> that I can use to speed up this process?
>>
>> Thank you for your time,
>>
>> Luke
>> Storrs-HPC
>> University of Connecticut
>

-- 
Jez Tucker
Head of Research and Development, Pixit Media
07764193820 | jtucker at pixitmedia.com
www.pixitmedia.com | Tw: @PixitMedia


