[gpfsug-discuss] dssgmkfs.mmvdisk number of NSD's

Achim Rehor Achim.Rehor at de.ibm.com
Mon Mar 1 09:46:06 GMT 2021


Correct, there was. 
The OS is dealing with pdisks, while GPFS is striping over Vdisks/NSDs.
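To see the layers side by side, a rough sketch, assuming an ESS/GSS/DSS
building block where the GNR list commands are available (command names
may vary by code level):

    # pdisks = the physical drives handled underneath by GNR / the OS
    mmlspdisk all
    # vdisks = the GNR RAID volumes built across those pdisks
    mmlsvdisk
    # NSDs = what GPFS itself stripes filesystem data over
    mmlsnsd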

For GNR there is a different queuing setup in GPFS than there is for
traditional NSDs. See "mmfsadm dump nsd" and check for
NsdQueueTraditional versus NsdQueueGNR.
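For example, a rough way to check which queue setup a server is using
(assuming the queue names appear literally in the dump output):

    mmfsadm dump nsd | grep -E 'NsdQueueTraditional|NsdQueueGNR'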

And yes, I was too strict with
">     The only reason for having more NSDs is for using them for
>     different filesystems."

There are other management reasons to run with a reasonable number of
vdisks, just not performance reasons.
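As a purely illustrative sketch (the vdisk set names and sizes below are
made up, and the exact mmvdisk options should be checked against the man
page for your code level), one could define a couple of vdisk sets over
the same recovery groups rather than a single set taking 100%, leaving
headroom that could later become a separate filesystem:

    # Hypothetical layout: ~80% for the current filesystem, ~10% held back
    mmvdisk vdiskset define --vdisk-set fs1data --recovery-group rg1,rg2 \
        --code 8+2p --block-size 16M --set-size 80%
    mmvdisk vdiskset define --vdisk-set heldback --recovery-group rg1,rg2 \
        --code 8+2p --block-size 16M --set-size 10%
    mmvdisk vdiskset create --vdisk-set fs1data,heldback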

    Mit freundlichen Gruessen / Kind regards

    Achim Rehor

    IBM EMEA ESS/Spectrum Scale Support


gpfsug-discuss-bounces at spectrumscale.org wrote on 01/03/2021 10:06:07:

> From: Simon Thompson <S.J.Thompson at bham.ac.uk>
> To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
> Date: 01/03/2021 10:06
> Subject: [EXTERNAL] Re: [gpfsug-discuss] dssgmkfs.mmvdisk number of NSD's
> Sent by: gpfsug-discuss-bounces at spectrumscale.org
> 
> Or for hedging your bets about how you might want to use it in future.
> 
> We are never quite sure if we want to do something different in the
> future with some of the storage. Sure, that might mean we want to
> steal some space from a file-system, but that is perfectly valid.
> And we have done this, both in temporary, transient states (data
> migration between systems) and permanently (found we needed
> something on a separate file-system).
> 
> So yes, whilst there might be no performance impact in doing this,
> we still do.
> 
> I vaguely recall some of the old reasoning was around IO queues in
> the OS, i.e. if you had 16 NSDs rather than 6 attached to the NSD
> server, you would have 16 IO queues passing to multipath, which can
> help keep the data pipes full. I suspect there was some optimal
> number of NSDs for different storage controllers, but I don't know
> if anyone ever benchmarked that.
> 
> Simon
> 
> On 01/03/2021, 08:16, "gpfsug-discuss-bounces at spectrumscale.org on 
> behalf of Achim.Rehor at de.ibm.com" <gpfsug-discuss-
> bounces at spectrumscale.org on behalf of Achim.Rehor at de.ibm.com> wrote:
> 
>     The reason for having multiple NSDs in legacy NSD (non-GNR) handling
>     is the increased parallelism, which gives you 'more spindles' and
>     thus more performance.
>     In GNR the drives are used in parallel anyway through the GNR
>     striping. Therefore, you are using all drives of an ESS/GSS/DSS
>     model under the hood in the vdisks anyway.
> 
>     The only reason for having more NSDs is for using them for
>     different filesystems.
> 
> 
>     Mit freundlichen Grüßen / Kind regards
> 
>     Achim Rehor
> 
>     IBM EMEA ESS/Spectrum Scale Support
> 
>     gpfsug-discuss-bounces at spectrumscale.org wrote on 01/03/2021 08:58:43:
> 
>     > From: Jonathan Buzzard <jonathan.buzzard at strath.ac.uk>
>     > To: gpfsug-discuss at spectrumscale.org
>     > Date: 01/03/2021 08:58
>     > Subject: [EXTERNAL] Re: [gpfsug-discuss] dssgmkfs.mmvdisk number of NSD's
>     > Sent by: gpfsug-discuss-bounces at spectrumscale.org
>     > 
>     > On 28/02/2021 09:31, Jan-Frode Myklebust wrote:
>     > > 
>     > > I've tried benchmarking many vs. few vdisks per RG, and never
>     > > could see any performance difference.
>     > 
>     > That's encouraging.
>     > 
>     > > 
>     > > Usually we create 1 vdisk per enclosure per RG, thinking this
>     > > will allow us to grow with same-size vdisks when adding
>     > > additional enclosures in the future.
>     > > 
>     > > Don't think mmvdisk can be told to create multiple vdisks per RG
>     > > directly, so you have to manually create multiple vdisk sets,
>     > > each with the appropriate size.
>     > > 
>     > 
>     > Thing is, back in the day, so GPFS v2.x/v3.x, there were strict
>     > warnings that you needed a minimum of six NSDs for optimal
>     > performance. I have sat in presentations where IBM employees have
>     > said so. What we were told back then is that GPFS needs a minimum
>     > number of NSDs in order to be able to spread the I/Os out. So if
>     > an NSD is being pounded for reads and a write comes in, it can
>     > direct it to a less busy NSD.
>     > 
>     > Now I can imagine that in an ESS/DSS-G, as it's being scattered
>     > to the winds under the hood, this is no longer relevant. But some
>     > notes to that effect for us old timers would be nice, if that is
>     > the case, to put our minds at rest.
>     > 
>     > 
>     > JAB.
>     > 
>     > -- 
>     > Jonathan A. Buzzard                         Tel: +44141-5483420
>     > HPC System Administrator, ARCHIE-WeSt.
>     > University of Strathclyde, John Anderson Building, Glasgow. G4 0NG
> 

More information about the gpfsug-discuss mailing list