[gpfsug-discuss] mmdf and maybe other commands long running // influence of n and B on number of regions

Walter Sklenka Walter.Sklenka at EDV-Design.at
Sat Feb 8 11:33:21 GMT 2020


Hello!
We are designing two file systems where we cannot anticipate whether there will be 3000, 5000, or even more nodes accessing them in total.
What we saw was that a single mmdf run can take 5-7 minutes.
We opened a case and were told that commands such as mmdf, mmfsck, mmdefragfs, and mmrestripefs must scan all allocation regions, and this is why they take so long.
The technician also said that, as a rule of thumb, there should be (-n)*32 regions; this would then be enough (N=5000 --> 160000 regions per pool?).
(Does the block size also have an influence on the number of regions?)
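
For reference, here is a minimal sketch of how we check the inputs to that rule of thumb ("fs1" stands in for the real device name; the 32x multiplier is only what support quoted to us):

#mmlsfs fs1 -n     (estimated number of nodes, the -n value)
#mmlsfs fs1 -B     (block size)
#time mmdf fs1     (times one full scan of all allocation regions)

With -n 5000 the rule of thumb gives 5000 * 32 = 160000 regions per pool; the dump below shows regns 170413 for our system pool, which is in that ballpark.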

#mmfsadm saferdump stripe
gives the number of regions per pool (see the "regns" field):

 storage pools: max 8
     alloc map type 'scatter'
      0: name 'system' Valid nDisks 12 nInUse 12 id 0 poolFlags 0 thinProvision reserved inode -1, reserved nBlocks 0
          regns 170413 segs 1 size 4096 FBlks 0 MBlks 3145728 subblock size 8192

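To pull out just that field, a simple one-liner (assuming the dump format shown above):

#mmfsadm saferdump stripe | grep regns
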
We also saw that when we create the file system with a specific, very high (-n) of 5000 (where mmdf execution time was several minutes) and then change (-n) to a lower value, this does not influence the behavior any more.
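
For what it's worth, the pattern looks like this ("fs1" and "disks.stanza" are placeholders; as far as we understand, a changed (-n) only applies to storage pools created afterwards):

#mmcrfs fs1 -F disks.stanza -B 4M -n 5000    (allocation maps sized for 5000 nodes)
#mmchfs fs1 -n 3000                          (existing pools keep their region count)
#mmfsadm saferdump stripe                    (regns unchanged for the existing pools)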

My question is: is the rule (number of nodes) * 32 for the number of regions in a pool a good estimate?
Is it better to overestimate the number of nodes (at the cost of longer-running commands), or is it unrealistic to run into problems when the calculated number of regions is not reached?

Does anybody have experience with a high number of nodes (>>3000) and with how to design file systems for such large clusters?

Thank you very much in advance !



Kind regards
Walter Sklenka
Technical Consultant

EDV-Design Informationstechnologie GmbH
Giefinggasse 6/1/2, A-1210 Wien
Tel: +43 1 29 22 165-31
Fax: +43 1 29 22 165-90
E-Mail: sklenka at edv-design.at
Internet: www.edv-design.at

