[gpfsug-discuss] Protocol node recommendations

Jan-Frode Myklebust janfrode at tanso.net
Sun Apr 23 11:07:38 BST 2017


The protocol sizing tool should be available from
https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/Sizing%20Guidance%20for%20Protocol%20Node/version/70a4c7c0-a5c6-4dde-b391-8f91c542dd7d
but I'm getting a 404 now.

I think 128GB should be enough for running both protocols on the same
nodes, and I think your 3-node suggestion is the better option: you get
better load sharing by not dedicating a subset of nodes to each protocol.
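
For reference, a minimal sketch of the round-robin DNS in a BIND zone
file; the service name and addresses here are made up, adjust to your
own zone:

    ; three A records for one name; resolvers rotate through them
    protocols  IN  A  10.0.0.11   ; protocol node 1
    protocols  IN  A  10.0.0.12   ; protocol node 2
    protocols  IN  A  10.0.0.13   ; protocol node 3

Clients connecting to "protocols" are then spread across all three
nodes, each serving both SMB and NFS.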



-jf
Sat. 22 Apr 2017 at 21:22, Frank Tower <frank.tower at outlook.com> wrote:

> Hi,
>
>
> Thanks for the recommendations.
>
> Now we are deciding between:
>
>
> - taking 3 nodes, with round-robin DNS, that handle both protocols
>
> - taking 4 nodes, splitting CIFS and NFS, and still using round-robin DNS
> for the CIFS and NFS services.
>
>
> Regarding your recommendations, 256GB of memory per node could be a plus
> if we mix both protocols in that case.
>
>
> Is the spreadsheet publicly available, or do we need to ask IBM?
>
>
> Thanks for your help,
>
> Frank.
>
>
> ------------------------------
> *From:* Jan-Frode Myklebust <janfrode at tanso.net>
> *Sent:* Saturday, April 22, 2017 10:50 AM
> *To:* gpfsug-discuss at spectrumscale.org
> *Subject:* Re: [gpfsug-discuss] Protocol node recommendations
>
> That's a tiny maxFilesToCache...
>
> I would start by implementing the settings from
> /usr/lpp/mmfs/*/gpfsprotocolldefaul* plus a 64GB pagepool for your
> protocol nodes, and leave further tuning until you actually see issues.
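>
> A minimal sketch of applying the pagepool part, assuming the built-in
> "cesNodes" node class that CES creates (verify the rest against the
> defaults file):
>
>     # set a 64GiB pagepool on the protocol nodes only
>     mmchconfig pagepool=64G -N cesNodes
>     # confirm the value
>     mmlsconfig pagepool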
>
> Regarding sizing, we have a spreadsheet somewhere where you can input some
> workload parameters and get an idea of how many nodes you'll need. Your
> node config seems fine, but one node seems too few to serve 1,000+ users.
> We support a maximum of 3,000 SMB connections per node, and I believe the
> recommendation is 4,000 NFS connections per node.
>
>
> -jf
> Sat. 22 Apr 2017 at 08:34, Frank Tower <frank.tower at outlook.com> wrote:
>
>> Hi,
>>
>> We have around 2PB of GPFS (4.2.2) here, accessed through an HPC cluster
>> with the GPFS client on each node.
>>
>> We will have to open GPFS to all our users over CIFS and Kerberized NFS,
>> with ACL support for both protocols, for around 1,000+ users.
>>
>> Users have different use cases and needs:
>> - some will do random I/O across a large set of open files (~5k files)
>> - some will do large writes, with 500GB-1TB files
>> - others will run sequential I/O over ~10k open files
>>
>> NFS and CIFS will share the same servers, so I thought of using SSD
>> drives and at least 128GB of memory, with 2 sockets.
>>
>> Regarding tuning parameters, I thought of the following (see the sketch
>> after the list):
>>
>> maxFilesToCache 10000
>> syncIntervalStrict yes
>> workerThreads (8 * number of cores)
>> prefetchPct 40 (for now; will adjust if needed)
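>>
>> A sketch of how those could be applied, assuming the built-in "cesNodes"
>> node class and 16-core nodes (so workerThreads = 8 * 16 = 128); the
>> values are just the ones above:
>>
>>     # apply the proposed tuning to the protocol nodes only; note that
>>     # maxFilesToCache only takes effect after GPFS restarts on them
>>     mmchconfig maxFilesToCache=10000,syncIntervalStrict=yes -N cesNodes
>>     mmchconfig workerThreads=128,prefetchPct=40 -N cesNodes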
>>
>> I read the wiki page 'Sizing Guidance for Protocol Node', but I was
>> wondering if someone could share their experience/best practices regarding
>> hardware sizing and/or tuning parameters.
>>
>> Thanks in advance,
>> Frank
>> _______________________________________________
>> gpfsug-discuss mailing list
>> gpfsug-discuss at spectrumscale.org
>> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>>
>