[gpfsug-discuss] Fail to mount file system

Ilan Schwarts ilan84 at gmail.com
Tue Jul 4 17:46:17 BST 2017


Yes, I am OK with deleting. I followed a guide from John Olsen at the IBM team
in Tucson, but the guide had steps after the GPFS setup... Is there a
step-by-step guide for GPFS cluster setup other than the one on the IBM site?
Thanks.

My bad, I gave the wrong command; the right one is: mmmount fs_gpfs01 -o rs

Also, can you send the output of mmlsnsd -X? We need to check the device type
of the NSDs.
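
For reference, a minimal sketch of that query limited to the two NSDs named
later in this thread (the -d disk list here is an assumption, not something
requested verbatim):

# Extended NSD information, including the device type, for both disks
mmlsnsd -X -d "nynsd1;nynsd2"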

Are you OK with deleting the file system and disks and building everything
from scratch?
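
(For anyone following the archive: a rough sketch of what deleting and
rebuilding from scratch typically looks like with the names used in this
thread; the device paths in the stanza file are placeholders, not details
taken from this cluster.)

# Unmount everywhere, then remove the damaged file system and its NSDs
mmumount fs_gpfs01 -a
mmdelfs fs_gpfs01
mmdelnsd "nynsd1;nynsd2"

# Recreate the NSDs from a stanza file (device paths are placeholders)
cat > /tmp/nsd.stanza <<EOF
%nsd: device=/dev/sdX nsd=nynsd1 usage=dataAndMetadata failureGroup=1
%nsd: device=/dev/sdY nsd=nynsd2 usage=dataAndMetadata failureGroup=2
EOF
mmcrnsd -F /tmp/nsd.stanza

# Recreate the file system on those NSDs and mount it on both nodes
mmcrfs fs_gpfs01 -F /tmp/nsd.stanza -T /fs_gpfs01 -A yes
mmmount fs_gpfs01 -a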


Regards, The Spectrum Scale (GPFS) team

------------------------------------------------------------------------------------------------------------------
If you feel that your question can benefit other users of Spectrum Scale
(GPFS), then please post it to the public IBM developerWorks Forum at
https://www.ibm.com/developerworks/community/forums/html/forum?id=11111111-0000-0000-0000-000000000479.

If your query concerns a potential software error in Spectrum Scale (GPFS)
and you have an IBM software maintenance contract please contact
 1-800-237-5511 in the United States or your local IBM Service Center in
other countries.

The forum is informally monitored as time permits and should not be used
for priority messages to the Spectrum Scale (GPFS) team.



From:        Ilan Schwarts <ilan84 at gmail.com>
To:        IBM Spectrum Scale <scale at us.ibm.com>
Cc:        gpfsug-discuss-bounces at spectrumscale.org, gpfsug main discussion
list <gpfsug-discuss at spectrumscale.org>
Date:        07/04/2017 04:26 PM
Subject:        Re: [gpfsug-discuss] Fail to mount file system
------------------------------



[root at LH20-GPFS1 ~]# mmmount fs_gpfs01 -a
Tue Jul  4 13:52:07 IDT 2017: mmmount: Mounting file systems ...
LH20-GPFS1:  mount: mount fs_gpfs01 on /fs_gpfs01 failed: Wrong medium type
mmdsh: LH20-GPFS1 remote shell process had return code 32.
LH20-GPFS2:  mount: mount fs_gpfs01 on /fs_gpfs01 failed: Stale file handle
mmdsh: LH20-GPFS2 remote shell process had return code 32.
mmmount: Command failed. Examine previous error messages to determine cause.

[root at LH20-GPFS1 ~]# mmmount -o rs /fs_gpfs01
mmmount: Mount point can not be a relative path name: rs
[root at LH20-GPFS1 ~]# mmmount -o rs fs_gpfs01
mmmount: Mount point can not be a relative path name: rs
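
(A guess at what is happening above: mmmount expects the device or mount
point as its first argument, so with -o placed first it parses "rs" as the
mount point. With the device first, as corrected later in the thread, the
option at least gets through:)

# Device first, then the mount options suggested by the support team
mmmount fs_gpfs01 -o rs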



I receive this in "dmesg":

[   18.338044] sd 2:0:0:1: [sdc] Attached SCSI disk
[  141.363422] hvt_cn_callback: unexpected netlink message!
[  141.366153] hvt_cn_callback: unexpected netlink message!
[ 4479.292850] tracedev: loading out-of-tree module taints kernel.
[ 4479.292888] tracedev: module verification failed: signature and/or
required key missing - tainting kernel
[ 4482.928413] ------------[ cut here ]------------
[ 4482.928445] WARNING: at fs/xfs/xfs_aops.c:906
xfs_do_writepage+0x537/0x550 [xfs]()
[ 4482.928446] Modules linked in: mmfs26(OE) mmfslinux(OE)
tracedev(OE) iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi ext4
mbcache jbd2 loop intel_powerclamp iosf_mbi sg pcspkr hv_utils
i2c_piix4 i2c_core nfsd auth_rpcgss nfs_acl lockd grace sunrpc
binfmt_misc ip_tables xfs libcrc32c sd_mod crc_t10dif
crct10dif_generic crct10dif_common ata_generic pata_acpi hv_netvsc
hyperv_keyboard hid_hyperv hv_storvsc hyperv_fb serio_raw fjes floppy
libata hv_vmbus dm_mirror dm_region_hash dm_log dm_mod
[ 4482.928471] CPU: 1 PID: 15210 Comm: mmfsd Tainted: G           OE
------------   3.10.0-514.21.2.el7.x86_64 #1

On Tue, Jul 4, 2017 at 11:36 AM, IBM Spectrum Scale <scale at us.ibm.com>
wrote:
> What exactly do you mean by "I have received existing corrupted GPFS 4.2.2
> lab"?
> Is the file system corrupted? Maybe this error is then due to file system
> corruption.
>
> Can you once try: mmmount fs_gpfs01 -a
> If this does not work then try: mmmount -o rs fs_gpfs01
>
> Let me know which mount is working.
>
> Regards, The Spectrum Scale (GPFS) team
>
> ------------------------------------------------------------------------------------------------------------------
> If you feel that your question can benefit other users of Spectrum Scale
> (GPFS), then please post it to the public IBM developerWorks Forum at
> https://www.ibm.com/developerworks/community/forums/html/forum?id=11111111-0000-0000-0000-000000000479.
>
> If your query concerns a potential software error in Spectrum Scale (GPFS)
> and you have an IBM software maintenance contract please contact
> 1-800-237-5511 in the United States or your local IBM Service Center in
> other countries.
>
> The forum is informally monitored as time permits and should not be used for
> priority messages to the Spectrum Scale (GPFS) team.
>
>
>
> From:        Ilan Schwarts <ilan84 at gmail.com>
> To:        gpfsug-discuss at spectrumscale.org
> Date:        07/04/2017 01:47 PM
> Subject:        [gpfsug-discuss] Fail to mount file system
> Sent by:        gpfsug-discuss-bounces at spectrumscale.org
> ________________________________
>
>
>
> Hi everyone, I have received an existing, corrupted GPFS 4.2.2 lab and I
> am trying to make it work.
> There are 2 nodes in a cluster:
> [root at LH20-GPFS1 ~]# mmgetstate -a
>
> Node number  Node name        GPFS state
> ------------------------------------------
>       1      LH20-GPFS1       active
>       3      LH20-GPFS2       active
>
> The Cluster status is:
> [root at LH20-GPFS1 ~]# mmlscluster
>
> GPFS cluster information
> ========================
>  GPFS cluster name:         MyCluster.LH20-GPFS2
>  GPFS cluster id:           10777108240438931454
>  GPFS UID domain:           MyCluster.LH20-GPFS2
>  Remote shell command:      /usr/bin/ssh
>  Remote file copy command:  /usr/bin/scp
>  Repository type:           CCR
>
> Node  Daemon node name  IP address    Admin node name  Designation
> --------------------------------------------------------------------
>   1   LH20-GPFS1        10.10.158.61  LH20-GPFS1       quorum-manager
>   3   LH20-GPFS2        10.10.158.62  LH20-GPFS2
>
> There is a file system:
> [root at LH20-GPFS1 ~]# mmlsnsd
>
> File system   Disk name    NSD servers
> ---------------------------------------------------------------------------
> fs_gpfs01     nynsd1       (directly attached)
> fs_gpfs01     nynsd2       (directly attached)
>
> [root at LH20-GPFS1 ~]#
>
> On each node, there is a folder /fs_gpfs01.
> The next step is to mount this fs_gpfs01 so it is synced between the 2 nodes.
> While executing mmmount I get an exception:
> [root at LH20-GPFS1 ~]# mmmount /fs_gpfs01
> Tue Jul  4 11:14:18 IDT 2017: mmmount: Mounting file systems ...
> mount: mount fs_gpfs01 on /fs_gpfs01 failed: Wrong medium type
> mmmount: Command failed. Examine previous error messages to determine cause.
>
>
> What am I doing wrong?
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>
>
>
>



--
Ilan Schwarts

