[gpfsug-discuss] Manager nodes
Uwe Falke
UWEFALKE at de.ibm.com
Tue Jan 24 17:36:22 GMT 2017
Hi, Kevin,
I'd look for more cores at the expense of clock speed. You send data over
routes with much higher latencies than your CPU-memory combination has
even at the slowest available clock rate, but GPFS with its
multi-threaded approach is surely happy if it can start a few more threads.
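To put rough numbers to that, here is a back-of-the-envelope sketch in
Python; the round-trip time and cycles-per-request figures are assumptions
for illustration only, not measurements from any real cluster. The network
round trip dominates what a client sees, so a faster clock barely changes
per-request latency, while extra cores scale how many requests the manager
can service in parallel.

NET_RTT_US = 100.0           # assumed network round trip for a token request (10 GbE)
CYCLES_PER_REQUEST = 50_000  # assumed CPU work per manager/token request

def per_request_latency_us(clock_ghz):
    """Client-visible latency: network round trip plus CPU service time."""
    cpu_us = CYCLES_PER_REQUEST / (clock_ghz * 1e3)  # GHz * 1e3 = cycles per microsecond
    return NET_RTT_US + cpu_us

def throughput_kreq_s(cores, clock_ghz):
    """Aggregate requests per second (thousands) if one worker thread keeps each core busy."""
    cpu_us = CYCLES_PER_REQUEST / (clock_ghz * 1e3)
    return cores * 1e6 / cpu_us / 1e3

for cores, clock in [(8, 3.5), (16, 2.4)]:
    print(f"{cores:2d} cores @ {clock} GHz: "
          f"latency ~{per_request_latency_us(clock):.0f} us, "
          f"throughput ~{throughput_kreq_s(cores, clock):.0f} k req/s")

With those assumed numbers the slower, wider box wins clearly on aggregate
throughput while the latency difference per request is only a few percent.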
Mit freundlichen Grüßen / Kind regards
Dr. Uwe Falke
IT Specialist
High Performance Computing Services / Integrated Technology Services /
Data Center Services
-------------------------------------------------------------------------------------------------------------------------------------------
IBM Deutschland
Rathausstr. 7
09111 Chemnitz
Phone: +49 371 6978 2165
Mobile: +49 175 575 2877
E-Mail: uwefalke at de.ibm.com
-------------------------------------------------------------------------------------------------------------------------------------------
IBM Deutschland Business & Technology Services GmbH / Management:
Frank Hammer, Thorsten Moehring
Registered office: Ehningen / Register court: Amtsgericht Stuttgart,
HRB 17122
From: "Buterbaugh, Kevin L" <Kevin.Buterbaugh at Vanderbilt.Edu>
To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Date: 01/24/2017 04:18 PM
Subject: Re: [gpfsug-discuss] Manager nodes
Sent by: gpfsug-discuss-bounces at spectrumscale.org
Hi Simon,
FWIW, we have two servers dedicated to cluster and filesystem management
functions (and 8 NSD servers). I guess you would describe our cluster as
small to medium sized: ~700 nodes and a little over 1 PB of storage.
Our two managers have 2 quad core (3 GHz) CPUs and 64 GB RAM. They've
got 10 GbE, but we don't use IB anywhere. We have an 8 Gb FC SAN and we
do have them connected in to the SAN so that they don't have to ask the
NSD servers to do any I/O for them.
I do collect statistics on all the servers and plunk them into an RRDtool
database. Looking at the last 30 days the load average on the two
managers is in the 5-10 range. Memory utilization seems to be almost
entirely dependent on how parameters like the pagepool are set on them.
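In case it helps, the collector is roughly this shape (a minimal Python
sketch; the RRD path and data-source layout here are placeholders, not our
actual setup):

import os
import subprocess

RRD = "/var/lib/rrd/mgr-node.rrd"   # hypothetical RRD, created elsewhere with
                                    # two GAUGE data sources: load1 and mem_used_kb

def sample():
    """Read the 1-minute load average and used memory (kB) from /proc."""
    load1 = os.getloadavg()[0]
    meminfo = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            meminfo[key] = int(value.split()[0])   # values are reported in kB
    mem_used = meminfo["MemTotal"] - meminfo["MemAvailable"]
    return load1, mem_used

def push(load1, mem_used):
    """Append one sample; 'N' tells rrdtool to timestamp it as 'now'."""
    subprocess.run(["rrdtool", "update", RRD, f"N:{load1}:{mem_used}"], check=True)

if __name__ == "__main__":
    push(*sample())

Run it from cron every minute or so and graph the RRD however you like.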
HTHAL…
Kevin
> On Jan 24, 2017, at 4:00 AM, Simon Thompson (Research Computing - IT
> Services) <S.J.Thompson at bham.ac.uk> wrote:
>
> We are looking at moving manager processes off our NSD nodes and on to
> dedicated quorum/manager nodes.
>
> Are there some broad recommended hardware specs for the function of these
> nodes?
>
> I assume they benefit from having high memory (for some value of high,
> probably a function of number of clients, files, expected open files?, and
> probably completely incalculable, so some empirical evidence may be useful
> here?) (I'm going to ignore the docs that say you should have twice as
> much swap as RAM!)
>
> What about cores, do they benefit from high core counts or high clock
> rates? For example would I benefit more from a high core count, low clock
> speed, or going for higher clock speeds and reducing core count? Or is
> memory bandwidth more important for manager nodes?
>
> Connectivity, does token management run over IB or only over
> Ethernet/admin network? I.e. should I bother adding IB cards, or just have
> fast Ethernet on them (my clients/NSDs all have IB).
>
> I'm looking for some hints on what I would benefit most from investing in
> vs. keeping to budget.
>
> Thanks
>
> Simon
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss