[gpfsug-discuss] Fail to mount file system

Ilan Schwarts ilan84 at gmail.com
Tue Jul 4 09:38:28 BST 2017


I mean that someone else tried to configure it and didn't do a good job, so now
it's up to me to continue.
On Jul 4, 2017 11:37, "IBM Spectrum Scale" <scale at us.ibm.com> wrote:

> What exactly do you mean by "I have received existing corrupted GPFS
> 4.2.2 lab"?
> Is the file system corrupted? If so, this error may be due to file system
> corruption.
>
> Can you once try: mmmount fs_gpfs01 -a
> If this does not work then try: mmmount -o rs fs_gpfs01
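>
> An aside not in the original reply, assuming standard GPFS behaviour:
> the first form mounts the file system on every node in the cluster,
> while "-o rs" requests a restricted mount, which GPFS allows against a
> possibly damaged file system so that whatever is still readable can be
> reached. A minimal sketch of trying them in order and then confirming
> the result:
>
>     mmmount fs_gpfs01 -a        # mount on all nodes
>     mmmount -o rs fs_gpfs01     # fall back to a restricted mount
>     mmlsmount fs_gpfs01 -L      # list the nodes that have it mounted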
>
> Let me know which mount is working.
>
> Regards, The Spectrum Scale (GPFS) team
>
> ------------------------------------------------------------------------------------------------------------------
> If you feel that your question can benefit other users of Spectrum Scale
> (GPFS), then please post it to the public IBM developerWorks Forum at
> https://www.ibm.com/developerworks/community/forums/html/forum?id=11111111-0000-0000-0000-000000000479.
>
> If your query concerns a potential software error in Spectrum Scale (GPFS)
> and you have an IBM software maintenance contract, please contact
> 1-800-237-5511 in the United States or your local IBM Service Center in
> other countries.
>
> The forum is informally monitored as time permits and should not be used
> for priority messages to the Spectrum Scale (GPFS) team.
>
>
>
> From:        Ilan Schwarts <ilan84 at gmail.com>
> To:        gpfsug-discuss at spectrumscale.org
> Date:        07/04/2017 01:47 PM
> Subject:        [gpfsug-discuss] Fail to mount file system
> Sent by:        gpfsug-discuss-bounces at spectrumscale.org
> ------------------------------
>
>
>
> Hi everyone, I have received an existing, corrupted GPFS 4.2.2 lab and I
> am trying to make it work.
> There are 2 nodes in a cluster:
> [root at LH20-GPFS1 ~]# mmgetstate -a
>
> Node number  Node name        GPFS state
> ------------------------------------------
>       1      LH20-GPFS1       active
>       3      LH20-GPFS2       active
>
> The cluster status is:
> [root at LH20-GPFS1 ~]# mmlscluster
>
> GPFS cluster information
> ========================
>  GPFS cluster name:         MyCluster.LH20-GPFS2
>  GPFS cluster id:           10777108240438931454
>  GPFS UID domain:           MyCluster.LH20-GPFS2
>  Remote shell command:      /usr/bin/ssh
>  Remote file copy command:  /usr/bin/scp
>  Repository type:           CCR
>
> Node  Daemon node name  IP address    Admin node name  Designation
> --------------------------------------------------------------------
>   1   LH20-GPFS1        10.10.158.61  LH20-GPFS1       quorum-manager
>   3   LH20-GPFS2        10.10.158.62  LH20-GPFS2
>
> There is a file system:
> [root at LH20-GPFS1 ~]# mmlsnsd
>
> File system   Disk name    NSD servers
> ---------------------------------------------------------------------------
> fs_gpfs01     nynsd1       (directly attached)
> fs_gpfs01     nynsd2       (directly attached)
>
> [root at LH20-GPFS1 ~]#
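>
> (A hedged note not in the original mail: since both NSDs are directly
> attached, it may be worth confirming that each node can see the
> underlying block devices and that the disks are marked available before
> retrying the mount, for example:
>
>     mmlsnsd -X            # map each NSD to its local block device
>     mmlsdisk fs_gpfs01    # show disk status and availability
>
> A disk shown as down, or not found on one of the nodes, would be one
> plausible reason for the mount failure.)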
>
> On each node, there is a folder /fs_gpfs01.
> The next step is to mount fs_gpfs01 so that it is shared between the two nodes.
> While executing mmmount I get an error:
> [root at LH20-GPFS1 ~]# mmmount /fs_gpfs01
> Tue Jul  4 11:14:18 IDT 2017: mmmount: Mounting file systems ...
> mount: mount fs_gpfs01 on /fs_gpfs01 failed: Wrong medium type
> mmmount: Command failed. Examine previous error messages to determine cause.
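>
> (A hedged note not in the original mail: "Wrong medium type" returned
> by a GPFS mount often seems to point at the file system's on-disk
> structures rather than at the mount command itself. If the disks look
> healthy, a read-only consistency check of the unmounted file system,
> for example
>
>     mmfsck fs_gpfs01 -n    # report problems only, change nothing
>
> would show whether the file system is in fact damaged.)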
>
>
> What am I doing wrong?
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>

