[gpfsug-discuss] GPFS 5 and supported rhel OS

Renata Maria Dart renata at slac.stanford.edu
Thu Feb 20 16:57:47 GMT 2020


Hi Frederick, ours is a physics research lab with a mix of new
experiments and ongoing research.  While some users embrace the latest
that technology has to offer and are actively writing code to take
advantage of it, we also have users running older code, on data from
older experiments, that depends on features of older OS releases, and
they are often not the people who wrote that code.  We have a mix of
systems to accommodate both groups.

Renata


On Thu, 20 Feb 2020, Frederick Stock wrote:

>This is a bit off the point of this discussion, but it seemed like an appropriate context for me to post this question.  IMHO the state of software is such that
>it is expected to change rather frequently; consider, for example, the OS on your laptop/tablet/smartphone and your web browser.  It is correct to say those devices
>are not running an HPC or enterprise environment, but I mention them because I expect none of us would think of running those devices on software that is a version
>far from the latest available.  With that as background, I am curious to understand why folks would continue to run systems on software like RHEL 6.x, which is
>now two major releases (and many years) behind the current version of that product.  Is it simply the effort required to upgrade 100s/1000s of nodes and the
>disruption that causes, or are there other factors that make keeping current with OS releases problematic?  I do understand it is not just a matter of upgrading
>the OS but all the software, like Spectrum Scale, that runs atop that OS in your environment.  While they do not all remain in lock step, I would think that within
>some window of time, say 12-18 months after an OS release, all software in your environment would support a new/recent OS release that would technically permit
>the system to be upgraded.
> 
>I should add that I think you want to be on or near the latest release of any software with the presumption that newer versions should be an improvement over
>older versions, albeit with the usual caveats of new defects.
>
>Fred
>__________________________________________________
>Fred Stock | IBM Pittsburgh Lab | 720-430-8821
>stockf at us.ibm.com
> 
> 
>      ----- Original message -----
>      From: Jonathan Buzzard <jonathan.buzzard at strath.ac.uk>
>      Sent by: gpfsug-discuss-bounces at spectrumscale.org
>      To: "gpfsug-discuss at spectrumscale.org" <gpfsug-discuss at spectrumscale.org>
>      Cc:
>      Subject: [EXTERNAL] Re: [gpfsug-discuss] GPFS 5 and supported rhel OS
>      Date: Thu, Feb 20, 2020 6:24 AM
>        On 20/02/2020 10:41, Simon Thompson wrote:
>      > Well, if you were buying some form of extended Life Support for
>      > Scale, then you might also be expecting to buy extended life for
>      > Red Hat. RHEL6 has extended life support until June 2024. Sure, it's
>      > an add-on subscription cost, but some people might be prepared to pay
>      > that rather than upgrade the OS.
>
>      I would recommend anyone going down that route to take a *very* close
>      look at what you get for the extended support. Not all of the OS is
>      supported, with large chunks being moved to unsupported status even if
>      you pay for the extended support.
>
>      Consequently, extended support is not suitable for HPC usage in my view,
>      so start planning the upgrade now. It's not like you haven't had 10
>      years' notice.
>
>      If your GPFS is just a storage thing serving out on protocol nodes,
>      upgrade one node at a time to RHEL7 and then repeat the exercise to
>      upgrade to GPFS 5. It's a relatively easy upgrade and invisible to the
>      users (see the sketch further below).
>
>      JAB.
>
>      --
>      Jonathan A. Buzzard                         Tel: +44141-5483420
>      HPC System Administrator, ARCHIE-WeSt.
>      University of Strathclyde, John Anderson Building, Glasgow. G4 0NG
>      _______________________________________________
>      gpfsug-discuss mailing list
>      gpfsug-discuss at spectrumscale.org
>      http://gpfsug.org/mailman/listinfo/gpfsug-discuss 
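
For a cluster where GPFS is just serving storage out through protocol nodes, the
one-node-at-a-time approach Jonathan describes can be driven by a small script
along the following lines.  This is only a sketch under assumptions: the node
names, the OS reinstall step and the GPFS 5 package install are placeholders for
whatever a given site actually uses, and the check on the mmgetstate output is
approximate; mmshutdown, mmstartup and mmgetstate are the standard Spectrum
Scale administration commands in /usr/lpp/mmfs/bin.

#!/usr/bin/env python3
# Rough sketch (not from the thread above) of a rolling protocol-node upgrade.
# Node names, the OS reinstall and the GPFS 5 install are placeholders.
import subprocess
import time

MMFS_BIN = "/usr/lpp/mmfs/bin"           # standard Spectrum Scale command path
PROTOCOL_NODES = ["proto01", "proto02"]  # placeholder node names

def run(cmd):
    """Echo and run a command, raising if it exits non-zero."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def wait_until_active(node, timeout=600):
    """Poll mmgetstate until the node reports 'active' (parsing is approximate)."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        out = subprocess.run([MMFS_BIN + "/mmgetstate", "-N", node],
                             capture_output=True, text=True).stdout
        if "active" in out:
            return
        time.sleep(15)
    raise RuntimeError(node + " did not return to the active state in time")

for node in PROTOCOL_NODES:
    # Take GPFS down on this node only; the remaining nodes keep serving.
    run([MMFS_BIN + "/mmshutdown", "-N", node])

    # Placeholder: reinstall/upgrade the node to RHEL 7, install the GPFS 5
    # packages and rebuild the portability layer, then reboot -- driven by
    # whatever the site normally uses (kickstart, Ansible, Satellite, ...).

    # Bring GPFS back up and wait for it before touching the next node.
    run([MMFS_BIN + "/mmstartup", "-N", node])
    wait_until_active(node)

The point of sequencing it this way is that only one protocol node is ever out
of the cluster, so the upgrade stays invisible to users as long as the
remaining nodes can carry the load.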


More information about the gpfsug-discuss mailing list