[gpfsug-discuss] tsgskkm stuck---> what about AMD epyc support in GPFS?

Simon Thompson S.J.Thompson at bham.ac.uk
Fri Sep 4 10:02:29 BST 2020


Of course, you might also be interested in our upcoming Webinar on 22nd September (which I haven't advertised yet):

https://www.spectrumscaleug.org/event/ssugdigital-deep-dive-in-spectrum-scale-core/

... This presentation will discuss selected improvements in Spectrum Scale V5, focusing on inode management, vCPU scaling, and NUMA considerations.

Simon

On 04/09/2020, 08:56, "gpfsug-discuss-bounces at spectrumscale.org on behalf of Jonathan Buzzard" <gpfsug-discuss-bounces at spectrumscale.org on behalf of jonathan.buzzard at strath.ac.uk> wrote:

    On 02/09/2020 23:28, Andrew Beattie wrote:
    > Giovanni, I have clients in Australia that are running AMD Rome
    > processors in their visualisation nodes connected to Scale 5.0.4
    > clusters with no issues. Spectrum Scale doesn't differentiate between
    > x86 processor technologies -- it only looks at x86_64 (OS support
    > more than anything else)

    While true, bear in mind there are limits on the number of cores, and 
    it might be quite easy to exceed them on a high-end multi-CPU AMD 
    machine :-)

    See question 5.3

    https://www.ibm.com/support/knowledgecenter/STXKQY/gpfsclustersfaq.pdf

    192 is the largest tested core count, and there is a hard limit of 
    1536 cores.
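
    If you want a quick sanity check before adding a big node to a 
    cluster, something like the following Python sketch compares the 
    count the OS reports against the FAQ figures. The 192/1536 numbers 
    are the 5.x values quoted above, and it assumes the limits apply to 
    logical CPUs as seen by the OS, which the FAQ does not spell out:

        import os

        # Core-count limits quoted from the Spectrum Scale FAQ (question
        # 5.3). These are the 5.x figures; older releases have lower
        # tested limits, so check the FAQ for your version.
        TESTED_CORES = 192       # largest configuration IBM has tested
        HARD_LIMIT_CORES = 1536  # GPFS will not go past this

        # os.cpu_count() reports logical CPUs; on SMT systems that is
        # threads, not physical cores. The FAQ is not explicit about
        # which it means, so we assume logical CPUs here.
        cores = os.cpu_count() or 0
        print(f"Logical CPUs on this node: {cores}")

        if cores > HARD_LIMIT_CORES:
            print("Over the hard limit -- GPFS will not run here.")
        elif cores > TESTED_CORES:
            print("Over the tested limit -- allowed, but untested.")
        else:
            print("Within the tested limit.")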

    From memory these limits are lower in older versions of GPFS. I think 
    the "tested" limit in 4.2 was 64 cores (or was at the time of 
    release), but it works just fine on 80 cores as far as I can tell.

    JAB.

    -- 
    Jonathan A. Buzzard                         Tel: +44141-5483420
    HPC System Administrator, ARCHIE-WeSt.
    University of Strathclyde, John Anderson Building, Glasgow. G4 0NG


