[gpfsug-discuss] Protocol node recommendations

Jan-Frode Myklebust janfrode at tanso.net
Sat Apr 22 09:50:11 BST 2017


That's a tiny maxFilesToCache...

I would start by implementing the settings from
/usr/lpp/mmfs/*/gpfsprotocolldefaul* plus a 64GB pagepool for your
protocol nodes, and leave further tuning until you actually see issues.
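
A minimal sketch of how that could look, assuming your CES protocol nodes
sit in the standard cesNodes node class (the exact defaults file name and
its contents vary per release, so check the glob first):

    # inspect the shipped protocol defaults (file name varies per release)
    cat /usr/lpp/mmfs/*/gpfsprotocolldefaul*

    # apply the 64GB pagepool to the protocol nodes only;
    # cesNodes is assumed to be the node class holding your CES nodes
    mmchconfig pagepool=64G -N cesNodes

    # verify what is in effect
    mmlsconfig pagepool
    mmlsconfig maxFilesToCache

The other attributes from the defaults file can be applied with mmchconfig
in the same way; note that a pagepool change normally takes effect only
after GPFS is restarted on those nodes, unless applied immediately with -i.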

Regarding sizing, we have a spreadsheet somewhere where you can input some
workload parameters and get an idea of how many nodes you'll need. Your
node config seems fine, but one node seems too few to serve 1000+ users. We
support a maximum of 3000 SMB connections per node, and I believe the
recommendation is 4000 NFS connections per node.


-jf
On Sat, 22 Apr 2017 at 08:34, Frank Tower <frank.tower at outlook.com> wrote:

> Hi,
>
> We have around 2PB of GPFS (4.2.2) here, accessed through an HPC cluster
> with a GPFS client on each node.
>
> We will have to open GPFS to all our users over CIFS and Kerberized NFS,
> with ACL support for both protocols, for around 1000+ users.
>
> All users have different use cases and needs:
> - some will do random I/O across a large set of open files (~5k files)
> - some will do large writes with 500GB-1TB files
> - others will do sequential I/O with ~10k open files
>
> NFS and CIFS will share the same servers, so I thought of using SSD drives,
> at least 128GB of memory, and 2 sockets.
>
> Regarding tuning parameters, I was thinking of:
>
> maxFilesToCache 10000
> syncIntervalStrict yes
> workerThreads (8*core)
> prefetchPct 40 (for now, to be updated if needed)
>
> I read the wiki page 'Sizing Guidance for Protocol Node', but I was wondering
> if someone could share their experience/best practices regarding hardware
> sizing and/or tuning parameters.
>
> Thanks in advance,
> Frank

