[gpfsug-discuss] Manager nodes

Bryan Banister bbanister at jumptrading.com
Tue Jan 24 16:53:24 GMT 2017


It goes over IP, and that could be IPoIB if you have the daemon interface or subnets configured that way, but it will go over native IB verbs if you have verbsRdmaSend enabled (not recommended for large clusters). A rough sketch of checking and toggling that setting follows the excerpt below.

         verbsRdmaSend
                  Enables or disables the use of InfiniBand RDMA rather
                  than TCP for most GPFS daemon-to-daemon communication.
                  When disabled, only data transfers between an NSD client
                  and NSD server are eligible for RDMA. Valid values are
                  enable or disable. The default value is disable. The
                  verbsRdma option must be enabled for verbsRdmaSend to
                  have any effect.
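
If you did want to experiment with it, a rough sketch (the mm* commands are standard, but check the restart requirement and scope against your release):

   # Check the current RDMA-related settings
   mmlsconfig verbsRdma verbsRdmaSend

   # Enable RDMA for most daemon-to-daemon traffic, cluster-wide
   mmchconfig verbsRdmaSend=enable

   # Assumption: the new value is picked up when the daemon restarts
   # (disruptive; restart nodes in a maintenance window)
   mmshutdown -a && mmstartup -a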

HTH,
-B

-----Original Message-----
From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Simon Thompson (Research Computing - IT Services)
Sent: Tuesday, January 24, 2017 10:34 AM
To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Subject: Re: [gpfsug-discuss] Manager nodes

Thanks both. I was thinking of adding 4 (we have a storage cluster spanning two DCs, so the plan was to put two in each and use them as quorum nodes as well, plus one floating VM to guarantee only one site is quorate in the event of someone cutting a fibre...)
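
Something like this is roughly what I have in mind for the designations (just a sketch; node names made up):

   # Two dedicated nodes per DC, designated quorum + manager
   mmchnode --quorum --manager -N mgr-dc1-1,mgr-dc1-2,mgr-dc2-1,mgr-dc2-2

   # The floating VM as a quorum-only tiebreaker between sites
   mmchnode --quorum -N quorum-vm1

   # Confirm the designations
   mmlscluster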

We pretty much start at 128 GB RAM and go from there, so this sounds fine. It would be good if someone could comment on whether token traffic goes via IB or Ethernet; maybe I can save myself a few EDR cards...

Simon
________________________________________
From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Jan-Frode Myklebust [janfrode at tanso.net]
Sent: 24 January 2017 15:51
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] Manager nodes

Just some datapoints, in hope that it helps..

I've seen metadata performance improve by turning down hyperthreading (SMT) from 8 to 4 threads per core on Power8. It also helped to spread the token managers over more nodes (6+) rather than concentrating them on a few.
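
(For the SMT change, a sketch; on Power8 under Linux this is normally done with the ppc64_cpu tool from powerpc-utils:)

   # Show the current SMT mode
   ppc64_cpu --smt

   # Drop from SMT-8 to SMT-4
   ppc64_cpu --smt=4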

I would expect this to flow over IP, not IB.




-jf


tir. 24. jan. 2017 kl. 16.18 skrev Buterbaugh, Kevin L <Kevin.Buterbaugh at vanderbilt.edu<mailto:Kevin.Buterbaugh at vanderbilt.edu>>:
Hi Simon,

FWIW, we have two servers dedicated to cluster and filesystem management functions (and 8 NSD servers).  I guess you would describe our cluster as small to medium sized ... ~700 nodes and a little over 1 PB of storage.

Our two managers have two quad-core (3 GHz) CPUs and 64 GB RAM.  They've got 10 GbE, but we don't use IB anywhere.  We have an 8 Gb FC SAN and we do have them connected to the SAN so that they don't have to ask the NSD servers to do any I/O for them.

I do collect statistics on all the servers and plunk them into an RRDtool database.  Looking at the last 30 days, the load average on the two managers is in the 5-10 range.  Memory utilization seems to be almost entirely dependent on how parameters like the pagepool are set on them.
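
If it helps, the collection is basically along these lines (a rough sketch; the file name and interval here are made up):

   # One-time setup: 5-minute samples of the 1-minute load average,
   # keeping 30 days' worth (8640 x 300 s)
   rrdtool create manager-load.rrd --step 300 \
       DS:load1:GAUGE:600:0:U \
       RRA:AVERAGE:0.5:1:8640

   # Run from cron every 5 minutes on each manager
   rrdtool update manager-load.rrd N:$(cut -d' ' -f1 /proc/loadavg)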

HTHAL...

Kevin

> On Jan 24, 2017, at 4:00 AM, Simon Thompson (Research Computing - IT Services) <S.J.Thompson at bham.ac.uk<mailto:S.J.Thompson at bham.ac.uk>> wrote:
>
> We are looking at moving manager processes off our NSD nodes and on to
> dedicated quorum/manager nodes.
>
> Are there some broad recommended hardware specs for the function of these
> nodes.
>
> I assume they benefit from having high memory (for some value of high,
> probably a function of the number of clients, files and expected open
> files, and probably impossible to calculate exactly, so some empirical
> evidence may be useful here?). (I'm going to ignore the docs that say
> you should have twice as much swap as RAM!)
>
> What about cores, do they benefit from high core counts or high clock
> rates? For example, would I benefit more from a high core count and
> lower clock speed, or from higher clock speeds and a reduced core
> count? Or is memory bandwidth more important for manager nodes?
>
> Connectivity, does token management run over IB or only over
> Ethernet/admin network? I.e. Should I bother adding IB cards, or just have
> fast Ethernet on them (my clients/NSDs all have IB).
>
> I'm looking for some hints on what I would most benefit in investing in vs
> keeping to budget.
>
> Thanks
>
> Simon
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org<http://spectrumscale.org>
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org<http://spectrumscale.org>
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss



