[gpfsug-discuss] dssgmkfs.mmvdisk number of NSD's

Jan-Frode Myklebust janfrode at tanso.net
Sun Feb 28 09:31:57 GMT 2021


I’ve tried benchmarking many vs. few vdisks per RG, and could never see any
performance difference.

Usually we create 1 vdisk per enclosure per RG, thinking this will allow
us to grow with same-size vdisks when adding additional enclosures in the
future.

I don’t think mmvdisk can be told to create multiple vdisks per RG directly,
so you have to manually define multiple vdisk sets, each with the appropriate
size.
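
A rough sketch of what that could look like, splitting the data capacity of
each recovery group across two vdisk sets. The set names, RG names, RAID
code, block size and percentages here are just placeholders for your setup,
and I haven't verified these exact invocations against the DSS-G wrapper:

    # Define two data vdisk sets, each sized at 40% of the RG capacity
    # (names and values are examples only):
    mmvdisk vdiskset define --vdisk-set vs_data_1 --recovery-group rg_1,rg_2 \
        --code 8+2p --block-size 8m --set-size 40%
    mmvdisk vdiskset define --vdisk-set vs_data_2 --recovery-group rg_1,rg_2 \
        --code 8+2p --block-size 8m --set-size 40%

    # Sanity-check the sizing before creating anything:
    mmvdisk vdiskset list --vdisk-set all

    # Create the vdisks/NSDs:
    mmvdisk vdiskset create --vdisk-set vs_data_1,vs_data_2

After that the vdisk sets can go into the file system with
"mmvdisk filesystem create" / "mmvdisk filesystem add" as usual.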



  -jf

On Sat, 27 Feb 2021 at 19:01, Jonathan Buzzard <
jonathan.buzzard at strath.ac.uk> wrote:

>
> We are doing an upgrade on our storage, which involved replacing all the
> 4TB disks with 16TB disks. There were some hiccups with five of the disks
> being dead when inserted, but that is all sorted.
>
> The system was originally installed with DSS-G 2.0a, so with "legacy"
> commands for vdisks etc. We had 10 metadata NSDs and 10 data NSDs per
> drawer (aka recovery group) of the D3284 enclosures.
>
> dssgmkfs.mmvdisk has created exactly one data and one metadata NSD
> per drawer of a D3284, leading to a really small number of NSDs in the
> file system.
>
> All my instincts tell me that this is going to lead to horrible
> performance on the file system. Historically you wanted a reasonable
> number of NSDs in a system for decent performance.
>
> Taking what dssgmkfs.mmvdisk has given me, even with a DSS-G 260 you
> would get only 12 NSDs of each type, which for a potentially ~5PB file
> system seems on the really low side to me.
>
> Is there any way to tell dssgmkfs.mmvdisk to create more NSDs than
> one per recovery group, or is this no longer relevant and performance
> with really low numbers of NSDs is fine these days?
>
>
> JAB.
>
> --
> Jonathan A. Buzzard                         Tel: +44141-5483420
> HPC System Administrator, ARCHIE-WeSt.
> University of Strathclyde, John Anderson Building, Glasgow. G4 0NG