[gpfsug-discuss] dssgmkfs.mmvdisk number of NSD's

Achim Rehor Achim.Rehor at de.ibm.com
Mon Mar 1 08:16:43 GMT 2021


The reason for having multiple NSDs in legacy (non-GNR) NSD handling is 
increased parallelism: it gives you 'more spindles' and thus more 
performance.
In GNR the drives are used in parallel anyway, through GNR striping. 
Therefore, all drives of an ESS/GSS/DSS model are already being used 
under the hood by the vdisks. 

The only remaining reason for creating more NSDs is to use them for 
different filesystems. 
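
As a minimal sketch of that, assuming the mmvdisk syntax from the 
Spectrum Scale RAID documentation (the vdisk set names, recovery group 
names, RAID code, block size and set sizes here are purely 
illustrative):

    # Define one vdisk set per filesystem, splitting the recovery
    # group capacity between them:
    mmvdisk vdiskset define --vdisk-set fs1data --recovery-group rg1,rg2 \
        --code 8+2p --block-size 4M --set-size 60%
    mmvdisk vdiskset define --vdisk-set fs2data --recovery-group rg1,rg2 \
        --code 8+2p --block-size 4M --set-size 40%

    # Create the vdisk NSDs, then build one filesystem on each set:
    mmvdisk vdiskset create --vdisk-set fs1data,fs2data
    mmvdisk filesystem create --file-system fs1 --vdisk-set fs1data
    mmvdisk filesystem create --file-system fs2 --vdisk-set fs2data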

 
Mit freundlichen Grüßen / Kind regards

Achim Rehor

IBM EMEA ESS/Spectrum Scale Support


gpfsug-discuss-bounces at spectrumscale.org wrote on 01/03/2021 08:58:43:

> From: Jonathan Buzzard <jonathan.buzzard at strath.ac.uk>
> To: gpfsug-discuss at spectrumscale.org
> Date: 01/03/2021 08:58
> Subject: [EXTERNAL] Re: [gpfsug-discuss] dssgmkfs.mmvdisk number of NSD's
> Sent by: gpfsug-discuss-bounces at spectrumscale.org
> 
> On 28/02/2021 09:31, Jan-Frode Myklebust wrote:
> > 
> > I've tried benchmarking many vs. few vdisks per RG, and never could see 
> > any performance difference.
> 
> That's encouraging.
> 
> > 
> > Usually we create 1 vdisk per enclosure per RG, thinking this will 
> > allow us to grow with same-size vdisks when adding additional 
> > enclosures in the future.
> > 
> > Don't think mmvdisk can be told to create multiple vdisks per RG 
> > directly, so you have to manually create multiple vdisk sets, each 
> > with the appropriate size.
> > 
> 
> Thing is, back in the day (GPFS v2.x/v3.x) there were strict warnings 
> that you needed a minimum of six NSDs for optimal performance. I have 
> sat in presentations where IBM employees said so. What we were told 
> back then is that GPFS needs a minimum number of NSDs in order to be 
> able to spread the I/Os out, so if an NSD is being pounded for reads 
> and a write comes in, it can direct the write to a less busy NSD.
> 
> Now I can imagine that in an ESS/DSS-G, where everything is scattered 
> to the winds under the hood, this is no longer relevant. But if that 
> is the case, some notes to that effect would be nice, to put us old 
> timers' minds at rest.
> 
> 
> JAB.
> 
> -- 
> Jonathan A. Buzzard                         Tel: +44141-5483420
> HPC System Administrator, ARCHIE-WeSt.
> University of Strathclyde, John Anderson Building, Glasgow. G4 0NG
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
> 
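
For reference, the 'one vdisk per enclosure per RG' practice Jan-Frode 
describes above might look like the following (a hypothetical sketch; 
the names and percentages are illustrative, and each set is sized to 
one enclosure's share of a four-enclosure building block):

    # Four same-size vdisk sets, one per enclosure's worth of
    # capacity, so future enclosures can be added as further
    # identically sized sets:
    mmvdisk vdiskset define --vdisk-set enc1 --recovery-group rg1,rg2 \
        --code 8+2p --block-size 4M --set-size 25%
    mmvdisk vdiskset define --vdisk-set enc2 --recovery-group rg1,rg2 \
        --code 8+2p --block-size 4M --set-size 25%
    # ... likewise enc3 and enc4, then:
    mmvdisk vdiskset create --vdisk-set enc1,enc2,enc3,enc4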