[gpfsug-discuss] Unbalanced pdisk free space

Dorigo Alvise (PSI) alvise.dorigo at psi.ch
Wed Feb 13 08:30:47 GMT 2019


Thank you, I've understood the math and the shift of focus from free space to used space.
The only thing that still seems strange to me is that I've not seen anything like this on other systems (an IBM ESS GL2 and other Lenovo G240 and G260 units), but I guess the reason could be that they have much less used space and fewer allocated vdisks.

   thanks,

   Alvise
________________________________
From: Sandeep Naik1 [sannaik2 at in.ibm.com]
Sent: Tuesday, February 12, 2019 8:50 PM
To: gpfsug main discussion list; Dorigo Alvise (PSI)
Subject: Re: [gpfsug-discuss] Unbalanced pdisk free space

Hi Alvise,

Here is my response to your question below.

Q - Can anybody tell me if it is normal that all the pdisks of both my recovery groups that reside on one physical enclosure have free space equal to (more or less) 1/3 of the free space of the pdisks residing on the other physical enclosure (see the attached text files for the command-line output)?

Yes, it is normal to see variation in free space between pdisks. The variation should be seen in the context of used space, not free space. GNR tries to balance used space equally across enclosures (failure groups). One enclosure has one SSD (per RG), so it has 41 disks in DA1 while the other one has 42. The enclosure with 42 disks shows about 360 GiB of free space per pdisk, while the one with 41 disks shows about 120 GiB. If you take the used capacity of each enclosure and compare them, you will notice that they are almost the same:

42 * (10240 - 360) ≃ 41 * (10240 - 120)
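
As a quick sanity check, here is a minimal Python sketch of that comparison (the 10240 GiB pdisk size and the 360/120 GiB free-space figures are the approximate values quoted above, not exact measurements):

    # Compare used capacity across the two enclosures, using the
    # approximate figures quoted above (pdisk size ~10240 GiB).
    pdisk_size_gib = 10240

    used_encl_42 = 42 * (pdisk_size_gib - 360)  # 42 pdisks, ~360 GiB free each
    used_encl_41 = 41 * (pdisk_size_gib - 120)  # 41 pdisks, ~120 GiB free each

    print(used_encl_42, used_encl_41)                       # 414960 vs 414920 GiB
    print(abs(used_encl_42 - used_encl_41) / used_encl_42)  # difference below 0.01%

So the used capacity differs by only about 40 GiB out of roughly 405 TiB per enclosure.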

I guess that when the least-free disks are fully occupied (while the others still have some free space), write performance will drop by a factor of two. Correct?
Is there a way (considering that the system is in production) to fix (rebalance) this free space among all pdisks of both enclosures?

You should look at this in the context of the pdisk size, which in your case is 10 TB. A pdisk showing 120 GiB free is roughly 99% full, while one showing 360 GiB free is roughly 96% full. This free space is available for creating vdisks and should not be confused with free space available in the filesystem. Your pdisks are by and large equally filled, so the small variation in free space will have no impact on write performance.
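
For context, the same figures expressed as percent full (again a small sketch, assuming the ~10240 GiB usable pdisk size from the numbers above):

    # Percent full for the two groups of pdisks, assuming ~10240 GiB per pdisk.
    pdisk_size_gib = 10240
    for free_gib in (120, 360):
        pct_full = 100 * (1 - free_gib / pdisk_size_gib)
        print(f"{free_gib} GiB free -> {pct_full:.1f}% full")
    # 120 GiB free -> 98.8% full
    # 360 GiB free -> 96.5% full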

Hope this helps

Thanks,

Sandeep Naik
Elastic Storage server / GPFS Test
ETZ-B, Hinjewadi Pune India
(+91) 8600994314



From:        "Dorigo Alvise (PSI)" <alvise.dorigo at psi.ch>
To:        gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Date:        31/01/2019 04:07 PM
Subject:        Re: [gpfsug-discuss] Unbalanced pdisk free space
Sent by:        gpfsug-discuss-bounces at spectrumscale.org
________________________________



They're attached.

Thanks!

   Alvise

________________________________

From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of IBM Spectrum Scale [scale at us.ibm.com]
Sent: Wednesday, January 30, 2019 9:25 PM
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] Unbalanced pdisk free space

Alvise,

Could you send us the output of the following commands from both server nodes?

  *   mmfsadm dump nspdclient > /tmp/dump_nspdclient.<nodeName>
  *   mmfsadm dump pdisk   > /tmp/dump_pdisk.<nodeName>

Regards, The Spectrum Scale (GPFS) team

------------------------------------------------------------------------------------------------------------------
If you feel that your question can benefit other users of Spectrum Scale (GPFS), then please post it to the public IBM developerWorks Forum at https://www.ibm.com/developerworks/community/forums/html/forum?id=11111111-0000-0000-0000-000000000479.

If your query concerns a potential software error in Spectrum Scale (GPFS) and you have an IBM software maintenance contract, please contact 1-800-237-5511 in the United States or your local IBM Service Center in other countries.

The forum is informally monitored as time permits and should not be used for priority messages to the Spectrum Scale (GPFS) team.



From:        "Dorigo Alvise (PSI)" <alvise.dorigo at psi.ch>
To:        "gpfsug-discuss at spectrumscale.org" <gpfsug-discuss at spectrumscale.org>
Date:        01/30/2019 08:24 AM
Subject:        [gpfsug-discuss] Unbalanced pdisk free space
Sent by:        gpfsug-discuss-bounces at spectrumscale.org
________________________________



Hello,
I have a Lenovo Spectrum Scale DSS-G220 system (software dss-g-2.0a) composed of:
2x x3560 M5 IO server nodes
1x x3550 M5 client/support node
2x disk enclosures D3284
GPFS/GNR 4.2.3-7

Can anybody tell me if it is normal that all the pdisks of both my recovery groups that reside on one physical enclosure have free space equal to (more or less) 1/3 of the free space of the pdisks residing on the other physical enclosure (see the attached text files for the command-line output)?

I guess that when the least-free disks are fully occupied (while the others still have some free space), write performance will drop by a factor of two. Correct?
Is there a way (considering that the system is in production) to fix (rebalance) this free space among all pdisks of both enclosures?

Should I open a PMR with IBM?

Many thanks,

  Alvise

[attachment "rg1" deleted by Brian Herr/Poughkeepsie/IBM] [attachment "rg2" deleted by Brian Herr/Poughkeepsie/IBM] _______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


[attachment "dump_nspdclient.sf-dssio-1" deleted by Sandeep Naik1/India/IBM] [attachment "dump_nspdclient.sf-dssio-2" deleted by Sandeep Naik1/India/IBM] [attachment "dump_pdisk.sf-dssio-1" deleted by Sandeep Naik1/India/IBM] [attachment "dump_pdisk.sf-dssio-2" deleted by Sandeep Naik1/India/IBM] _______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


