[gpfsug-discuss] nsd not adding with one quorum node down?

Bryan Banister bbanister at jumptrading.com
Thu Jan 5 20:44:33 GMT 2017


Looking at this further, the output says “The following disks of home will be formatted on node cl003:”. However, that node is the one in the ‘arbitrating’ state, so I don’t see how that would work.
-B

From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Bryan Banister
Sent: Thursday, January 05, 2017 2:27 PM
To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Subject: Re: [gpfsug-discuss] nsd not adding with one quorum node down?

Removing the quorum designation is an option.

However, I believe a file system manager must be assigned to the file system in order for the mmadddisk to work.  The mmadddisk will not succeed if no manager is assigned (check with mmlsmgr), if the manager role keeps being reassigned to nodes but the assignment fails (check /var/adm/ras/mmfs.log.latest on all nodes), or if assignment is blocked by the apparent node recovery in the cluster indicated by the one node in the ‘arbitrating’ state.
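For reference, a minimal sketch of the manager check. The sample output and the exact layout of mmlsmgr's listing are assumptions here, so treat this as illustrative parsing rather than the definitive format:

```shell
# Illustrative only: parse mmlsmgr-style output to see whether a file
# system manager is assigned for "home". The sample text below is a
# hypothetical stand-in; on a live cluster you would pipe `mmlsmgr` itself.
mmlsmgr_output='file system      manager node
---------------- ------------------
home             10.0.0.2 (cl002)'

# Skip the header rows by matching the file system name in column 1
mgr=$(printf '%s\n' "$mmlsmgr_output" | awk '$1 == "home" { print $2, $3 }')
if [ -z "$mgr" ]; then
  echo "no file system manager assigned for home"
else
  echo "file system manager for home: $mgr"
fi
```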

-Bryan

From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of J. Eric Wonderley
Sent: Thursday, January 05, 2017 2:13 PM
To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Subject: Re: [gpfsug-discuss] nsd not adding with one quorum node down?

Bryan:
Have you ever attempted to do this knowing that one quorum server is down?  Will *all* NSD servers fail to see the NSD about to be added?
How about temporarily removing quorum from an NSD server...?

Thanks

On Thu, Jan 5, 2017 at 3:06 PM, Bryan Banister <bbanister at jumptrading.com> wrote:
There may be an issue with one of the other NSDs in the file system, according to the “mmadddisk: File system home has some disks that are in a non-ready state.” message in your output.  Best to check the status of the NSDs in the file system using `mmlsdisk home`; if any disks are not ‘up’, then run `mmchdisk home start -a` after confirming that all NSD servers can see the disks.  I typically use `mmdsh -N nsdnodes tspreparedisk -s | dshbak -c` for that.
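The check above can be sketched programmatically. This is a hedged sketch, not a GPFS command: it flags disks whose status is not “ready”, using the mmlsdisk rows from this thread as stand-in input (a live check would pipe `mmlsdisk home` instead):

```shell
# Stand-in sample taken from the mmlsdisk output in this thread; a live
# check would use: mmlsdisk home | tail -n +4
mmlsdisk_output='r10f1e5      nsd         512        1001 No       Yes   allocmap add  up           fc_ssd400G
r6d2e8       nsd         512        1001 No       Yes   ready         up           fc_8T
r6d3e8       nsd         512        1001 No       Yes   ready         up           fc_8T'

non_ready=$(printf '%s\n' "$mmlsdisk_output" | awk '{
  # availability is the next-to-last field and the pool is last; the
  # status string is everything from field 7 up to the availability column
  status = $7
  for (i = 8; i <= NF - 2; i++) status = status " " $i
  if (status != "ready" || $(NF - 1) != "up")
    printf "%s: status=%s availability=%s\n", $1, status, $(NF - 1)
}')
printf '%s\n' "$non_ready"
```

On this sample it reports only r10f1e5, whose “allocmap add” status matches the non-ready disk the mmadddisk failure left behind.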

Hope that helps,
-Bryan

From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of J. Eric Wonderley
Sent: Thursday, January 05, 2017 2:01 PM
To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Subject: [gpfsug-discuss] nsd not adding with one quorum node down?

I have one quorum node down and am attempting to add an NSD to a file system:
[root at cl005 ~]# mmadddisk home -F add_1_flh_home -v no |& tee /root/adddisk_flh_home.out
Verifying file system configuration information ...

The following disks of home will be formatted on node cl003:
    r10f1e5: size 1879610 MB
Extending Allocation Map
Checking Allocation Map for storage pool fc_ssd400G
  55 % complete on Thu Jan  5 14:43:31 2017
Lost connection to file system daemon.
mmadddisk: tsadddisk failed.
Verifying file system configuration information ...
mmadddisk: File system home has some disks that are in a non-ready state.
mmadddisk: Propagating the cluster configuration data to all
  affected nodes.  This is an asynchronous process.
mmadddisk: Command failed. Examine previous error messages to determine cause.
I had to use -v no (this failed once before).  Anyhow, I next see:
[root at cl002 ~]# mmgetstate -aL

 Node number  Node name       Quorum  Nodes up  Total nodes  GPFS state  Remarks
------------------------------------------------------------------------------------
       1      cl001              0        0          8       down        quorum node
       2      cl002              5        6          8       active      quorum node
       3      cl003              5        0          8       arbitrating quorum node
       4      cl004              5        6          8       active      quorum node
       5      cl005              5        6          8       active      quorum node
       6      cl006              5        6          8       active      quorum node
       7      cl007              5        6          8       active      quorum node
       8      cl008              5        6          8       active      quorum node
[root at cl002 ~]# mmlsdisk home
disk         driver   sector     failure holds    holds                            storage
name         type       size       group metadata data  status        availability pool
------------ -------- ------ ----------- -------- ----- ------------- ------------ ------------
r10f1e5      nsd         512        1001 No       Yes   allocmap add  up           fc_ssd400G
r6d2e8       nsd         512        1001 No       Yes   ready         up           fc_8T
r6d3e8       nsd         512        1001 No       Yes   ready         up           fc_8T
Do all quorum nodes have to be up and participating to do these admin-type operations?
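For what it's worth, the mmgetstate output above can be tallied to answer part of this: quorum requires 5 of the 8 quorum nodes, and 6 are active, so node quorum itself is still satisfied. A hedged sketch using the rows above as stand-in input (a live check would pipe `mmgetstate -aL`):

```shell
# Stand-in sample from the mmgetstate -aL table in this thread;
# columns: number, name, quorum-needed, nodes-up, total, state, remarks
mmgetstate_output='1      cl001              0        0          8       down        quorum node
2      cl002              5        6          8       active      quorum node
3      cl003              5        0          8       arbitrating quorum node
4      cl004              5        6          8       active      quorum node
5      cl005              5        6          8       active      quorum node
6      cl006              5        6          8       active      quorum node
7      cl007              5        6          8       active      quorum node
8      cl008              5        6          8       active      quorum node'

summary=$(printf '%s\n' "$mmgetstate_output" | awk '
  # count quorum nodes, how many are active, and the largest reported
  # quorum requirement (down nodes report 0)
  $7 == "quorum" { total++; if ($6 == "active") active++; if ($3 > needed) needed = $3 }
  END { printf "quorum nodes active: %d of %d (needed: %d)\n", active, total, needed }')
printf '%s\n' "$summary"
```

Since quorum holds here, the failure looks less like lost quorum and more like the manager assignment and recovery issues discussed earlier in the thread.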


________________________________

Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product.

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
