[gpfsug-discuss] GPFS 5 and supported rhel OS

Ken Atkinson hpc.ken.tw25qn at gmail.com
Thu Feb 20 16:29:40 GMT 2020


Fred,
It may be that some HPC users "have to"
reverify that the results of their computations are exactly the same as
with a previous software stack, and that is not a minor task. Any change
may require this verification process.
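
A minimal sketch of such a check in Python, assuming the outputs are
plain files and bit-identical reproduction is the acceptance criterion
(the manifest layout and paths here are hypothetical, not anyone's
actual workflow):

    import hashlib
    import sys
    from pathlib import Path

    def sha256(path: Path) -> str:
        """SHA-256 hex digest of a file, read in 1 MiB chunks."""
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def verify(manifest: Path, results_dir: Path) -> int:
        """Check each 'digest  filename' manifest line; count mismatches."""
        failures = 0
        for line in manifest.read_text().splitlines():
            digest, name = line.split(maxsplit=1)
            if sha256(results_dir / name) != digest:
                print(f"MISMATCH: {name}")
                failures += 1
        return failures

    if __name__ == "__main__":
        sys.exit(1 if verify(Path(sys.argv[1]), Path(sys.argv[2])) else 0)

Record the manifest under the old stack, rerun the jobs on the new one,
and any non-zero exit means re-validation work.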
Ken Atkinson

On Thu, 20 Feb 2020, 14:35 Frederick Stock, <stockf at us.ibm.com> wrote:

> This is a bit off the point of this discussion but it seemed like an
> appropriate context for me to post this question.  IMHO the state of
> software is such that it is expected to change rather frequently, for
> example the OS on your laptop/tablet/smartphone and your web browser.  It
> is correct to say those devices are not running an HPC or enterprise
> environment but I mention them because I expect none of us would think of
> running those devices on software that is a version far from the latest
> available.  With that as background, I am curious to understand why folks
> would continue to run systems on software like RHEL 6.x, which is now two
> major releases (and many years) behind the current version of that product?
> Is it simply the effort required to upgrade 100s/1000s of nodes and the
> disruption that causes, or are there other factors that make keeping
> current with OS releases problematic?  I do understand it is not just a
> matter of upgrading the OS but all the software, like Spectrum Scale, that
> runs atop that OS in your environment.  While they do not all remain in
> lock step, I would think that in some window of time, say 12-18 months
> after an OS release, all software in your environment would support a
> new/recent OS release that would technically permit the system to be
> upgraded.
>
> I should add that I think you want to be on or near the latest release of
> any software, on the presumption that newer versions should be an
> improvement over older ones, albeit with the usual caveat of new
> defects.
>
> Fred
> __________________________________________________
> Fred Stock | IBM Pittsburgh Lab | 720-430-8821
> stockf at us.ibm.com
>
>
>
> ----- Original message -----
> From: Jonathan Buzzard <jonathan.buzzard at strath.ac.uk>
> Sent by: gpfsug-discuss-bounces at spectrumscale.org
> To: "gpfsug-discuss at spectrumscale.org" <gpfsug-discuss at spectrumscale.org>
> Cc:
> Subject: [EXTERNAL] Re: [gpfsug-discuss] GPFS 5 and supported rhel OS
> Date: Thu, Feb 20, 2020 6:24 AM
>
> On 20/02/2020 10:41, Simon Thompson wrote:
> > Well, if you were buying some form of extended Life Support for
> > Scale, then you might also be expecting to buy extended life for
> RedHat. RHEL6 has extended life support until June 2024. Sure, it's an
> add-on subscription cost, but some people might be prepared to do
> that rather than upgrade the OS.
>
> I would recommend anyone going down that route to take a *very* close
> look at what you get for the extended support. Not all of the OS is
> covered; large chunks are moved to unsupported status even if you pay
> for the extended support.
>
> Consequently, extended support is not suitable for HPC use in my view,
> so start planning the upgrade now. It's not like you haven't had 10
> years' notice.
>
> If your GPFS is just a storage service exported via protocol nodes,
> upgrade one node at a time to RHEL7 and then repeat the process to get
> to GPFS 5. It's a relatively easy upgrade, invisible to the users.
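>
> Roughly, that loop looks like the sketch below (assuming passwordless
> ssh from an admin node; mmshutdown/mmstartup/mmgetstate are the real
> GPFS commands, but the node names and the reinstall script are made
> up, as every site does that step differently):
>
>     import subprocess
>     import time
>
>     NODES = ["proto1", "proto2", "proto3"]  # hypothetical protocol nodes
>
>     def ssh(node, cmd, check=True):
>         """Run one command on a node over ssh."""
>         return subprocess.run(["ssh", node] + cmd.split(),
>                               check=check, capture_output=True, text=True)
>
>     def site_upgrade(node):
>         # Hypothetical placeholder for however your site reinstalls a
>         # node with RHEL7 and GPFS 5 (kickstart, xCAT, Ansible, ...).
>         ssh(node, "/usr/local/sbin/upgrade-to-rhel7-gpfs5")
>
>     for node in NODES:
>         ssh(node, "/usr/lpp/mmfs/bin/mmshutdown")  # take one node down
>         site_upgrade(node)
>         ssh(node, "/usr/lpp/mmfs/bin/mmstartup")   # rejoin the cluster
>         # poll until this node reports active before touching the next
>         while "active" not in ssh(node, "/usr/lpp/mmfs/bin/mmgetstate",
>                                   check=False).stdout:
>             time.sleep(10)
>
> Do one node at a time and the cluster stays up throughout.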
>
> JAB.
>
> --
> Jonathan A. Buzzard                         Tel: +44141-5483420
> HPC System Administrator, ARCHIE-WeSt.
> University of Strathclyde, John Anderson Building, Glasgow. G4 0NG
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss