[gpfsug-discuss] Recommended pagepool size on clients?

Sven Oehme oehmes at gmail.com
Tue Oct 10 19:00:55 BST 2017


If this is a new cluster and you use reasonably new HW, I would probably
start with just the following settings on the clients:

pagepool=4g,workerThreads=256,maxStatCache=0,maxFilesToCache=256k

Depending on what storage you use and what workload you have, you may have
to set a couple of other parameters too, but that should be a good start.
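A minimal sketch of how these could be applied with mmchconfig (the node
class name "clients" below is an assumption, so substitute your own node
list or node class; without -i, these parameters only take effect after
GPFS is restarted on the affected nodes):

  # Apply the suggested starting values on the client nodes;
  # "clients" is an assumed node class name -- use your own node list/class
  mmchconfig pagepool=4g,workerThreads=256,maxStatCache=0,maxFilesToCache=256k -N clients

  # Restart GPFS on those nodes so the new values take effect
  mmshutdown -N clients
  mmstartup -N clients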
We plan to make this whole process significantly easier in the future: the
next major Scale release will eliminate the need for another ~20 parameters
in special cases, and we will simplify the communication setup a lot too.
Beyond that, we have started working on tuning suggestions based on the
running system environment, but there is no release targeted for that yet.

Sven


On Tue, Oct 10, 2017 at 1:42 AM John Hearns <john.hearns at asml.com> wrote:

> May I ask how to size pagepool on clients?  Somehow I hear an enormous tin
> can being opened behind me… and what sounds like lots of worms…
>
>
>
> Anyway, I currently have mmhealth reporting gpfs_pagepool_small. The pagepool
> is set to 1024M on clients,
>
> and I now note the documentation says you get this warning when the pagepool
> is less than or equal to 1 GB.
>
> We did do some IOR benchmarking which shows better performance with an
> increased pagepool size.
>
>
>
> I am looking for some rules of thumb for sizing for a 128 GByte RAM client.
>
> And yup, I know the answer will be ‘depends on your workload’
>
> I agree though that 1024M is too low.
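>
> For reference, a minimal sketch of how the current value and the health
> event can be checked (assuming a release that ships mmhealth; exact output
> varies by version):
>
>   # Configured pagepool as recorded in the cluster configuration
>   mmlsconfig pagepool
>
>   # Value the daemon is actually using on this node
>   mmdiag --config | grep pagepool
>
>   # Health events for the GPFS component, where gpfs_pagepool_small shows up
>   mmhealth node show GPFS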
>
>
>
> Illya,kuryakin at uncle.int
