[gpfsug-discuss] CES services on an existing GPFS cluster

Simon Thompson (Research Computing - IT Services) S.J.Thompson at bham.ac.uk
Tue Dec 6 08:17:37 GMT 2016


I'm sure we changed this recently. I think all the CES nodes need to be down, but I don't think the whole cluster does.

We certainly set it for the first time "live". Maybe it depends on the code version.

Simon
________________________________________
From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Jan-Frode Myklebust [janfrode at tanso.net]
Sent: 05 December 2016 14:34
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] CES services on an existing GPFS cluster

No, the first time you define it, I'm pretty sure it can be done online. But when changing it later, it will require stopping the full cluster first.
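For reference, something like the sequence below is what I'd expect on an initial setup -- the path and node names are just placeholders, and the exact behaviour may vary by code level, so check the mmchconfig documentation for your release first:

    # set the CES shared root -- placeholder path; on some code levels this
    # can only be changed while GPFS is down on the affected nodes
    mmchconfig cesSharedRoot=/gpfs/fs0/ces-root

    # enable CES on the chosen protocol nodes (placeholder node names)
    mmchnode --ces-enable -N protocol1,protocol2

    # enable the protocol services you need, for example NFS
    mmces service enable NFS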


-jf
Mon, 5 Dec 2016 at 15:26, Sander Kuusemets <sander.kuusemets at ut.ee> wrote:

Hello,

I have been thinking about setting up a CES cluster on my GPFS cluster for easier data distribution. The cluster is quite an old one - it started on 3.4, but we have been doing rolling upgrades on it. It is now on 4.2.0, ~200 nodes, CentOS 7, InfiniBand interconnected.

The problem is this little line in Spectrum Scale documentation:

The CES shared root directory cannot be changed when the cluster is up and running. If you want to modify the shared root configuration, you must bring the entire cluster down.

Does this mean that even the first time I set CES up, I have to pull down the whole cluster? I would understand this level of service disruption if I had already set the directory before and was now changing it, but on an initial setup it's quite an inconvenience. Maybe there's a less painful way to do this?

Best regards,

--
Sander Kuusemets
University of Tartu, High Performance Computing



_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
