[gpfsug-discuss] nsd not adding with one quorum node down?
J. Eric Wonderley
eric.wonderley at vt.edu
Thu Jan 5 20:13:28 GMT 2017
Bryan:
Have you ever attempted to do this knowing that one quorum server is down?
In that case *all* NSD servers will not be able to see the NSD about to be added, will they?
How about temporarily removing the quorum role from an NSD server...?
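For example, something like this (a sketch using the standard mmchnode quorum
options; I have not tried it with a quorum node already down):

  mmchnode --nonquorum -N cl001   # temporarily drop the quorum role from the down node
  ...run the mmadddisk...
  mmchnode --quorum -N cl001      # restore the role once cl001 is back up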
Thanks
On Thu, Jan 5, 2017 at 3:06 PM, Bryan Banister <bbanister at jumptrading.com>
wrote:
> There may be an issue with one of the other NSDs in the file system,
> according to the "mmadddisk: File system home has some disks that are in
> a non-ready state." message in your output. Best to check the status of
> the NSDs in the file system with `mmlsdisk home`, and if any disks are
> not 'up' then run `mmchdisk home start -a` after confirming
> that all NSD servers can see the disks. I typically use `mmdsh -N nsdnodes
> tspreparedisk -s | dshbak -c` for that.
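>
> A minimal sequence, assuming the file system is home:
>
>   mmlsdisk home                                     # any disks not "ready"/"up"?
>   mmdsh -N nsdnodes tspreparedisk -s | dshbak -c    # do all NSD servers see the disks?
>   mmchdisk home start -a                            # bring the non-ready disks back up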
>
>
>
> Hope that helps,
>
> -Bryan
>
>
>
> *From:* gpfsug-discuss-bounces at spectrumscale.org
> [mailto:gpfsug-discuss-bounces at spectrumscale.org] *On Behalf Of *J. Eric Wonderley
> *Sent:* Thursday, January 05, 2017 2:01 PM
> *To:* gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
> *Subject:* [gpfsug-discuss] nsd not adding with one quorum node down?
>
>
>
> I have one quorum node down and am attempting to add an NSD to a file system:
> [root@cl005 ~]# mmadddisk home -F add_1_flh_home -v no |& tee /root/adddisk_flh_home.out
> Verifying file system configuration information ...
>
> The following disks of home will be formatted on node cl003:
> r10f1e5: size 1879610 MB
> Extending Allocation Map
> Checking Allocation Map for storage pool fc_ssd400G
> 55 % complete on Thu Jan 5 14:43:31 2017
> Lost connection to file system daemon.
> mmadddisk: tsadddisk failed.
> Verifying file system configuration information ...
> mmadddisk: File system home has some disks that are in a non-ready state.
> mmadddisk: Propagating the cluster configuration data to all
> affected nodes. This is an asynchronous process.
> mmadddisk: Command failed. Examine previous error messages to determine
> cause.
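>
> (For reference, the add_1_flh_home stanza file would be roughly the
> following, judging from the output in this note; this is just the generic
> %nsd stanza layout, not the exact file contents:
>
>   %nsd:
>     nsd=r10f1e5
>     usage=dataOnly
>     failureGroup=1001
>     pool=fc_ssd400G
> )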
>
> Had to use -v no (this failed once before). Anyhow I next see:
> [root@cl002 ~]# mmgetstate -aL
>
>  Node number  Node name  Quorum  Nodes up  Total nodes  GPFS state    Remarks
> -------------------------------------------------------------------------------
>       1       cl001         0        0          8       down          quorum node
>       2       cl002         5        6          8       active        quorum node
>       3       cl003         5        0          8       arbitrating   quorum node
>       4       cl004         5        6          8       active        quorum node
>       5       cl005         5        6          8       active        quorum node
>       6       cl006         5        6          8       active        quorum node
>       7       cl007         5        6          8       active        quorum node
>       8       cl008         5        6          8       active        quorum node
> [root@cl002 ~]# mmlsdisk home
> disk         driver   sector     failure holds    holds                             storage
> name         type       size       group metadata data  status        availability pool
> ------------ -------- ------ ----------- -------- ----- ------------- ------------ ------------
> r10f1e5      nsd         512        1001 No       Yes   allocmap add  up           fc_ssd400G
> r6d2e8       nsd         512        1001 No       Yes   ready         up           fc_8T
> r6d3e8       nsd         512        1001 No       Yes   ready         up           fc_8T
>
> Do all quorum nodes have to be up and participating to do these admin-type
> operations?
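>
> (With eight quorum nodes, node quorum needs floor(8/2)+1 = 5 of them, which
> matches the "Quorum" column above, so a single down quorum node by itself
> should still leave the cluster with quorum.)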
>
>
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>
>