[gpfsug-discuss] dssgmkfs.mmvdisk number of NSD's

Laurence Horrocks-Barlow laurence at qsplace.co.uk
Mon Mar 1 08:59:35 GMT 2021


Like Jan, I did some benchmarking a few years ago when the default recommendation for RGs dropped to 1 per DA to meet rebuild requirements. I couldn't see any discernible difference.

As Achim has also mentioned, I just use vdisks for creating additional filesystems. Where there is likely to be a lot of shuffling of space or future filesystem builds, I divide the RGs into, say, 10 vdisks to give some flexibility and granularity, something like the sketch below.
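Roughly along these lines, as a sketch only; the vdisk set names, RAID code, block size and percentages are placeholders rather than a recommendation, so adjust for your own building block:

    # Define several smaller vdisk sets against the RG pair instead of one
    # vdisk set consuming 100% of the declustered array. Each vdisk set
    # becomes one vdisk/NSD per recovery group it is defined in.
    mmvdisk vdiskset define --vdisk-set vs01 --recovery-group rg1,rg2 \
        --code 8+2p --block-size 8M --set-size 10%
    # ...repeat for vs02 .. vs10...
    mmvdisk vdiskset create --vdisk-set all

    # Build the filesystem from a subset of the vdisk sets, keeping the
    # rest free for later growth or for additional filesystems.
    mmvdisk filesystem create --file-system gpfs01 --vdisk-set vs01,vs02,vs03
    mmvdisk filesystem add --file-system gpfs01 --vdisk-set vs04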

There is also a flag, IIRC, that changes the GPFS magic to consider multiple underlying disks, which can provide increased performance on traditional RAID builds; I'll post it when I find it again.

-- Lauz

On 1 March 2021 08:16:43 GMT, Achim Rehor <Achim.Rehor at de.ibm.com> wrote:
>The reason for having multiple NSDs in legacy NSD (non-GNR) handling is
>the increased parallelism, which gives you 'more spindles' and thus more
>performance. In GNR the drives are used in parallel anyway through the
>GNR striping. Therefore, you are using all drives of an ESS/GSS/DSS model
>under the hood in the vdisks anyway.
>
>The only reason for having more NSDs is for using them for different
>filesystems.
>
> 
>Mit freundlichen Grüßen / Kind regards
>
>Achim Rehor
>
>IBM EMEA ESS/Spectrum Scale Support
>
>gpfsug-discuss-bounces at spectrumscale.org wrote on 01/03/2021 08:58:43:
>
>> From: Jonathan Buzzard <jonathan.buzzard at strath.ac.uk>
>> To: gpfsug-discuss at spectrumscale.org
>> Date: 01/03/2021 08:58
>> Subject: [EXTERNAL] Re: [gpfsug-discuss] dssgmkfs.mmvdisk number of NSD's
>> Sent by: gpfsug-discuss-bounces at spectrumscale.org
>> 
>> On 28/02/2021 09:31, Jan-Frode Myklebust wrote:
>> > 
>> > I've tried benchmarking many vs. few vdisks per RG, and never could
>> > see any performance difference.
>> 
>> That's encouraging.
>> 
>> > 
>> > Usually we create 1 vdisk per enclosure per RG, thinking this will
>> > allow us to grow with same-size vdisks when adding additional
>> > enclosures in the future.
>> > 
>> > Don't think mmvdisk can be told to create multiple vdisks per RG
>> > directly, so you have to manually create multiple vdisk sets each
>> > with the appropriate size.
>> > 
>> 
>> Thing is, back in the day (GPFS v2.x/v3.x) there were strict warnings
>> that you needed a minimum of six NSDs for optimal performance. I have
>> sat in presentations where IBM employees have said so. What we were
>> told back then is that GPFS needs a minimum number of NSDs in order
>> to be able to spread the I/Os out, so if an NSD is being pounded for
>> reads and a write comes in, it can direct it to a less busy NSD.
>> 
>> Now I can imagine that in an ESS/DSS-G, where everything is scattered
>> to the winds under the hood, this is no longer relevant. But some
>> notes to that effect would be nice for us old timers, to put our
>> minds at rest.
>> 
>> 
>> JAB.
>> 
>> -- 
>> Jonathan A. Buzzard                         Tel: +44141-5483420
>> HPC System Administrator, ARCHIE-WeSt.
>> University of Strathclyde, John Anderson Building, Glasgow. G4 0NG
>> _______________________________________________
>> gpfsug-discuss mailing list
>> gpfsug-discuss at spectrumscale.org
>> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>> 
>
>
>_______________________________________________
>gpfsug-discuss mailing list
>gpfsug-discuss at spectrumscale.org
>http://gpfsug.org/mailman/listinfo/gpfsug-discuss

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.