[gpfsug-discuss] Contents of gpfsug-discuss Digest, Vol 107, Issue 13

Andrew Beattie abeattie at au1.ibm.com
Thu Dec 10 21:59:04 GMT 2020


Thanks Ed,

The UQ team are well aware of the current limits published in the FAQ.

However, the issue is not the number of physical nodes or the number of concurrent user sessions; rather, the number of SMB / NFS exports that Spectrum Scale supports from a single cluster, or even from remote-mount protocol clusters, is no longer enough for their research environment.

The current total number of exports cannot exceed 1,000, which is an issue when they have many thousands of research project IDs, with users needing access to each project ID under its relevant security permissions.
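
To make the headroom question concrete, here is a minimal sketch of how current usage could be compared against that ceiling. It assumes the standard mmsmb and mmnfs commands are on the PATH and that their list output prints one export per line after a header row; the exact output format varies by release, so treat the parsing as illustrative only.

    #!/usr/bin/env python3
    # Sketch: compare the number of defined SMB and NFS exports against the
    # export ceiling discussed in this thread.
    import subprocess

    EXPORT_LIMIT = 1000  # ceiling discussed in this thread

    def count_exports(cmd):
        # Run an mm* "list" command and count non-blank lines after the header
        # row (header handling is an assumption about the output format).
        out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
        lines = [line for line in out.splitlines() if line.strip()]
        return max(len(lines) - 1, 0)

    smb = count_exports(["mmsmb", "export", "list"])
    nfs = count_exports(["mmnfs", "export", "list"])
    print(f"SMB exports: {smb}, NFS exports: {nfs}, total: {smb + nfs} of {EXPORT_LIMIT}")
    if smb + nfs > 0.9 * EXPORT_LIMIT:
        print("WARNING: approaching the published export limit")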

Grouping project IDs under a single export isn’t a viable option, as there is no simple way to predict which research group or user will request a new project ID; new project IDs are created and allocated automatically when a request for a storage allocation is fulfilled.

Project IDs (independent filesets) are published not only as SMB exports, but are also mounted via multiple AFM cache clusters to high-performance instrument clusters, multiple HPC clusters, and up to 5 different campus access points, including remote universities.

The data workflow is not a simple linear workflow, and the mixture of different types of users and requests for storage and storage provisioning has led the University to build its own provisioning portal. That portal interacts with the Spectrum Scale data fabric (multiple Spectrum Scale clusters in a single global namespace, connected via AFM over 100Gb Ethernet) at multiple points to deliver project ID provisioning at the locations specified by the user or research group.
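
As a rough illustration only, the per-project step such a portal might drive could look like the sketch below. The filesystem name, junction path, and export naming here are invented, and the real portal presumably also handles quotas, ACLs, NFS exports and the AFM cache relationships, all of which are omitted.

    import subprocess

    def provision_project(project_id, device="fs1", base="/gpfs/fs1/projects"):
        """Create an independent fileset for a new project ID and publish it over SMB."""
        junction = f"{base}/{project_id}"
        # Independent fileset: gives the project its own inode space and is the
        # unit that gets surfaced elsewhere in the data fabric (e.g. via AFM).
        subprocess.run(["mmcrfileset", device, project_id, "--inode-space", "new"], check=True)
        subprocess.run(["mmlinkfileset", device, project_id, "-J", junction], check=True)
        # Publish the junction as an SMB export; each such export counts toward
        # the per-cluster export limit discussed above.
        subprocess.run(["mmsmb", "export", "add", project_id, junction], check=True)
        return junction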

One point of data surfacing in this data fabric is the Spectrum Scale protocols cluster that Les manages, which provides the central user access point via SMB or NFS; all research users across the university who want to access one or more of their storage allocations do so via the SMB / NFS mounts on this specific storage cluster.

Regards,


Andrew Beattie
File & Object Storage - Technical Lead
IBM Australia & New Zealand

Sent from my iPhone

> On 11 Dec 2020, at 00:41, Edward Boyd <eboyd at us.ibm.com> wrote:
> 
> 
> Please review the CES limits in the FAQ which states
> 
> Q5.2:
> What are some scaling considerations for the protocols function?
> A5.2:
> Scaling considerations for the protocols function include:
> The number of protocol nodes.
> If you are using SMB in any combination with other protocols, you can configure only up to 16 protocol nodes. This is a hard limit, and SMB cannot be enabled if there are more protocol nodes. If only NFS and Object are enabled, you can have 32 nodes configured as protocol nodes.
> 
> The number of client connections.
> A maximum of 3,000 SMB connections per protocol node is recommended, with a maximum of 20,000 SMB connections per cluster. A maximum of 4,000 NFS connections per protocol node is recommended. A maximum of 2,000 Object connections per protocol node is recommended. The maximum number of connections depends on the amount of memory configured and on sufficient CPU. We recommend a minimum of 64 GB of memory for Object-only or NFS-only use cases. If you have multiple protocols enabled, or if you have SMB enabled, we recommend 128 GB of memory on the system.
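> 
> As a rough cross-check, the figures above can be folded into a small planning sanity check; this is a sketch only, the numbers are copied from the FAQ text quoted here and are release-dependent.
> 
>     def check_ces_plan(protocol_nodes, smb_conns_per_node=0,
>                        nfs_conns_per_node=0, smb_enabled=True, memory_gb=128):
>         """Flag planned values that exceed the FAQ guidance quoted above."""
>         issues = []
>         # SMB (alone or with other protocols) caps the cluster at 16 protocol nodes.
>         node_cap = 16 if smb_enabled else 32
>         if protocol_nodes > node_cap:
>             issues.append(f"{protocol_nodes} protocol nodes exceeds the limit of {node_cap}")
>         if smb_conns_per_node > 3000:
>             issues.append("more than 3,000 SMB connections per protocol node")
>         if smb_conns_per_node * protocol_nodes > 20000:
>             issues.append("more than 20,000 SMB connections per cluster")
>         if nfs_conns_per_node > 4000:
>             issues.append("more than 4,000 NFS connections per protocol node")
>         if smb_enabled and memory_gb < 128:
>             issues.append("less than the recommended 128 GB of memory with SMB enabled")
>         return issues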
> 
> https://www.ibm.com/support/knowledgecenter/STXKQY/gpfsclustersfaq.html?view=kc#maxproto
> Edward L. Boyd ( Ed )
> IBM Certified Client Technical Specialist, Level 2 Expert
> Open Foundation, Master Certified Technical Specialist
> IBM Systems, Storage Solutions
> US Federal
> 407-271-9210 Office / Cell / Text
> eboyd at us.ibm.com email
> 
> -----gpfsug-discuss-bounces at spectrumscale.org wrote: -----
> To: gpfsug-discuss at spectrumscale.org
> From: gpfsug-discuss-request at spectrumscale.org
> Sent by: gpfsug-discuss-bounces at spectrumscale.org
> Date: 12/10/2020 07:00AM
> Subject: [EXTERNAL] gpfsug-discuss Digest, Vol 107, Issue 13
> 
> Send gpfsug-discuss mailing list submissions to
> gpfsug-discuss at spectrumscale.org
> 
> To subscribe or unsubscribe via the World Wide Web, visit
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss 
> or, via email, send a message with subject or body 'help' to
> gpfsug-discuss-request at spectrumscale.org
> 
> You can reach the person managing the list at
> gpfsug-discuss-owner at spectrumscale.org
> 
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of gpfsug-discuss digest..."
> 
> 
> Today's Topics:
> 
>    1. Protocol limits (leslie elliott)
>    2. Re: Protocol limits (Jan-Frode Myklebust)
> 
> 
> ----------------------------------------------------------------------
> 
> Message: 1
> Date: Thu, 10 Dec 2020 08:45:22 +1000
> From: leslie elliott <leslie.james.elliott at gmail.com>
> To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
> Subject: [gpfsug-discuss] Protocol limits
> Message-ID:
> <CANBv+tsnwzTH5796xMfpLmWc-aY5=kiHHLaacx-fzGdBLuPqgw at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
> 
> hi all
> 
> we run a large number of shares from CES servers connected to a single
> Scale cluster. We understand the current supported limit is 1000 SMB shares;
> we run the same number of NFS shares.
> 
> we also understand that using an external CES cluster to increase that limit
> is not supported, based on the documentation. We use the same authentication
> for all shares, and we have additional use cases for sharing where this
> pathway would be attractive going forward.
> 
> so the question becomes: if we need to run 20000 SMB and NFS shares off a
> Scale cluster, is there any hardware design we can use to do this whilst
> maintaining support?
> 
> I have submitted a support request to ask if this can be done, but thought I
> would ask the collective good in case this has already been solved.
> 
> thanks
> 
> leslie
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <http://gpfsug.org/pipermail/gpfsug-discuss/attachments/20201210/a460a862/attachment-0001.html >
> 
> ------------------------------
> 
> Message: 2
> Date: Thu, 10 Dec 2020 00:21:03 +0100
> From: Jan-Frode Myklebust <janfrode at tanso.net>
> To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
> Subject: Re: [gpfsug-discuss] Protocol limits
> Message-ID:
> <CAHwPatj8xi5Bez7M+GpqAGuOXy_P+qW87MJ4UF7Z2NxR1aeHhQ at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
> 
> My understanding of these limits is that they exist to keep the
> configuration files from becoming too large, which makes
> changing/processing them somewhat slow.
> 
> For SMB shares, you might be able to limit the number of configured shares
> by using wildcards in the config (%U); these wildcarded entries count as
> one share. Don't know if similar tricks can be done for NFS.
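> 
> (For illustration only: the kind of substitution meant here, written as a plain Samba smb.conf fragment, looks roughly like the lines below. Whether and how this can be expressed through the CES mmsmb tooling, and whether it maps onto per-project rather than per-user paths, would need to be confirmed.
> 
>     [projects]
>         path = /gpfs/fs1/projects/%U
>         read only = no
> 
> One such wildcarded definition serves many users while counting as a single configured share.)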
> 
> 
> 
>   -jf
> 
> On Wed, 9 Dec 2020 at 23:45, leslie elliott <
> leslie.james.elliott at gmail.com> wrote:
> 
> >
> > hi all
> >
> > we run a large number of shares from CES servers connected to a single
> > Scale cluster. We understand the current supported limit is 1000 SMB shares;
> > we run the same number of NFS shares.
> >
> > we also understand that using an external CES cluster to increase that limit
> > is not supported, based on the documentation. We use the same authentication
> > for all shares, and we have additional use cases for sharing where this
> > pathway would be attractive going forward.
> >
> > so the question becomes: if we need to run 20000 SMB and NFS shares off a
> > Scale cluster, is there any hardware design we can use to do this whilst
> > maintaining support?
> >
> > I have submitted a support request to ask if this can be done, but thought
> > I would ask the collective good in case this has already been solved.
> >
> > thanks
> >
> > leslie
> > _______________________________________________
> > gpfsug-discuss mailing list
> > gpfsug-discuss at spectrumscale.org
> > http://gpfsug.org/mailman/listinfo/gpfsug-discuss 
> >
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <http://gpfsug.org/pipermail/gpfsug-discuss/attachments/20201210/4744cdc0/attachment-0001.html >
> 
> ------------------------------
> 
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss 
> 
> 
> End of gpfsug-discuss Digest, Vol 107, Issue 13
> ***********************************************
> 
> 
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
> 


