[gpfsug-discuss] Policy scan against billion files for ILM/HSM

Bryan Banister bbanister at jumptrading.com
Tue Apr 11 16:29:25 BST 2017


A word of caution: be careful about where you run this kind of policy scan, as the sort process can consume all memory on your hosts, which could lead to the OS deciding to kill off GPFS or other similarly bad outcomes.  I recommend restricting the ILM policy scan to a subset of servers, excluding quorum nodes, and ensuring at least one NSD server is available for all NSDs in the file system(s).  Watch the memory consumption on your nodes during the sort operations to see if you need to tune that down in the mmapplypolicy options.
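As a rough illustration (the node class name, node names, file system name, and paths below are placeholders, not anything from a real environment), one way to do that is to fence the scan off onto a handful of non-quorum helper nodes and start with a dry run while you watch memory:

  # Hypothetical example: group the helper nodes into a node class and
  # restrict mmapplypolicy to them, so a heavy sort can't starve quorum
  # nodes or clients of memory.
  mmcrnodeclass policyhelpers -N scanner1,scanner2,scanner3,scanner4
  mmapplypolicy fs1 -P /gpfs/fs1/policies/hsm.rules -N policyhelpers -I test -L 1

Lowering the -a/-m/-n thread counts on those nodes is one way to rein in memory use if the sort still gets too hungry.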

Hope that helps,
-Bryan

From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Frederick Stock
Sent: Tuesday, April 11, 2017 6:54 AM
To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Subject: Re: [gpfsug-discuss] Policy scan against billion files for ILM/HSM

As Zachary noted the location of your metadata is the key and for the scanning you have planned flash is necessary.  If you have the resources you may consider setting up your flash in a mirrored RAID configuration (RAID1/RAID10) and have GPFS only keep one copy of metadata since the underlying storage is replicating it via the RAID.  This should improve metadata write performance but likely has little impact on your scanning, assuming you are just reading through the metadata.
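A sketch of what that could look like (the NSD names, device paths, server names, and file system name here are made up for illustration): dedicate the mirrored flash LUNs to metadata in the system pool, then tell GPFS to keep a single metadata copy.

  # Hypothetical NSD stanzas for mirrored (RAID1/RAID10) flash LUNs,
  # used for metadata only in the system pool:
  %nsd: nsd=md_flash1 device=/dev/mapper/flash01 servers=nsdserv1,nsdserv2 usage=metadataOnly pool=system
  %nsd: nsd=md_flash2 device=/dev/mapper/flash02 servers=nsdserv2,nsdserv1 usage=metadataOnly pool=system

  # With the RAID layer providing redundancy, keep one GPFS metadata replica:
  mmchfs fs1 -m 1

Note that changing -m only affects files created afterwards; existing files keep their replication until something like mmrestripefs fs1 -R is run.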

Fred
__________________________________________________
Fred Stock | IBM Pittsburgh Lab | 720-430-8821
stockf at us.ibm.com



From:        Zachary Giles <zgiles at gmail.com>
To:        gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Date:        04/11/2017 12:49 AM
Subject:        Re: [gpfsug-discuss] Policy scan against billion files for ILM/HSM
Sent by:        gpfsug-discuss-bounces at spectrumscale.org
________________________________



It's definitely doable, and these days not too hard. Flash for
metadata is the key.
The basics of it are:
* Latest GPFS for performance benefits.
* A few tens of TBs of flash (or more!) set up in a good design:
lots of SAS, well-balanced RAID that can consume the flash fully,
tuned for IOPS, and available in parallel from multiple servers.
* Tune up mmapplypolicy with -g somewhere-on-gpfs; --choice-algorithm
fast; -a, -m and -n set to reasonable values (number of cores on the
servers); -A to ~1000. (A sample invocation follows after this list.)
* Test first on a smaller fileset to confirm you like it. -I test
should work well and run at roughly the same speed, minus the migration
phase.
* Then throw ~8 well-tuned InfiniBand-attached nodes at it using -N.
If they're the same as the NSD servers serving the flash, even better.
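
To make those knobs concrete, here is a rough sketch of a full invocation (the file system name, rules path, node names, work directory, and thread counts are all placeholders to adapt, and worth checking against the mmapplypolicy man page for your release):

  # Dry run (-I test) of the scan on fs1, driven from 8 helper nodes, with the
  # global work directory on GPFS so all the helpers can share the sort files.
  mmapplypolicy fs1 -P /gpfs/fs1/policies/hsm.rules \
      -N nsdserv1,nsdserv2,nsdserv3,nsdserv4,nsdserv5,nsdserv6,nsdserv7,nsdserv8 \
      -g /gpfs/fs1/.policytmp \
      --choice-algorithm fast \
      -a 16 -m 16 -n 16 \
      -A 1000 \
      -I test -L 2

Once the test pass looks right and finishes in an acceptable time, swap -I test for -I yes (or -I defer) to actually perform the data movement.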

You should be able to do 1B files in 5-30 minutes depending on the
idiosyncrasies of the above choices. Even 60 minutes isn't bad, and quite
respectable if less gear is used or if the system is busy while the policy is running.
Parallel metadata, it's a beautiful thing.



On Tue, Apr 11, 2017 at 12:29 AM, Masanori Mitsugi
<mitsugi at linux.vnet.ibm.com> wrote:
> Hello,
>
> Does anyone have experience running mmapplypolicy against a billion files
> for ILM/HSM?
>
> Currently I'm planning/designing
>
> * 1 Scale filesystem (5-10 PB)
> * 10-20 filesets which includes 1 billion files each
>
> And our biggest concern is "How long does an mmapplypolicy policy scan
> against a billion files take?"
>
> I know it depends on how the policy is written,
> but I have no experience with billion-file policy scans,
> so I'd like to know the order of time (min/hour/day...).
>
> It would be helpful if anyone with experience scanning such a large number
> of files could share any considerations or points for policy design.
>
> --
> Masanori Mitsugi
> mitsugi at linux.vnet.ibm.com
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss



--
Zach Giles
zgiles at gmail.com
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss





