<div dir="auto">Check /var/adm/ras/mmfs.log.latest<div dir="auto">The xfs bug in dmesg is probably from boot; look at dmesg with -T to show human-readable timestamps.</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Jul 4, 2017 12:29 PM, "IBM Spectrum Scale" <<a href="mailto:scale@us.ibm.com">scale@us.ibm.com</a>> wrote:<br type="attribution"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><font size="2" face="sans-serif">My bad, I gave the wrong command; the right
one is: mmmount </font><tt><font size="2">fs_gpfs01</font></tt><font size="2" face="sans-serif"> -o rs</font><br><br><font size="2" face="sans-serif">Also, can you send the output of mmlsnsd
-X? We need to check the device type of the NSDs.</font><br><br><font size="2" face="sans-serif">Are you OK with deleting the file system
and disks and building everything from scratch?</font><br><br><br><font size="2" face="sans-serif">Regards, The Spectrum Scale (GPFS) team<br><br>------------------------------<wbr>------------------------------<wbr>------------------------------<wbr>------------------------<br>If you feel that your question can benefit other users of Spectrum
Scale (GPFS), then please post it to the public IBM developerWorks Forum
at </font><a href="https://www.ibm.com/developerworks/community/forums/html/forum?id=11111111-0000-0000-0000-000000000479" target="_blank"><font size="2" face="sans-serif">https://www.ibm.com/<wbr>developerworks/community/<wbr>forums/html/forum?id=11111111-<wbr>0000-0000-0000-000000000479</font></a><font size="2" face="sans-serif">.
<br><br>If your query concerns a potential software error in Spectrum Scale (GPFS)
and you have an IBM software maintenance contract, please contact <a href="tel:(800)%20237-5511" value="+18002375511" target="_blank">1-800-237-5511</a>
in the United States or your local IBM Service Center in other countries.
<br><br>The forum is informally monitored as time permits and should not be used
for priority messages to the Spectrum Scale (GPFS) team.</font><br><br><br><br><font size="1" color="#5f5f5f" face="sans-serif">From:
</font><font size="1" face="sans-serif">Ilan Schwarts <<a href="mailto:ilan84@gmail.com" target="_blank">ilan84@gmail.com</a>></font><br><font size="1" color="#5f5f5f" face="sans-serif">To:
</font><font size="1" face="sans-serif">IBM Spectrum Scale
<<a href="mailto:scale@us.ibm.com" target="_blank">scale@us.ibm.com</a>></font><br><font size="1" color="#5f5f5f" face="sans-serif">Cc:
</font><font size="1" face="sans-serif"><a href="mailto:gpfsug-discuss-bounces@spectrumscale.org" target="_blank">gpfsug-discuss-bounces@<wbr>spectrumscale.org</a>,
gpfsug main discussion list <<a href="mailto:gpfsug-discuss@spectrumscale.org" target="_blank">gpfsug-discuss@spectrumscale.<wbr>org</a>></font><br><font size="1" color="#5f5f5f" face="sans-serif">Date:
</font><font size="1" face="sans-serif">07/04/2017 04:26 PM</font><br><font size="1" color="#5f5f5f" face="sans-serif">Subject:
</font><font size="1" face="sans-serif">Re: [gpfsug-discuss]
Fail to mount file system</font><br><hr noshade><br><br><br><tt><font size="2">[root@LH20-GPFS1 ~]# mmmount fs_gpfs01 -a<br>Tue Jul 4 13:52:07 IDT 2017: mmmount: Mounting file systems ...<br>LH20-GPFS1: mount: mount fs_gpfs01 on /fs_gpfs01 failed: Wrong medium
type<br>mmdsh: LH20-GPFS1 remote shell process had return code 32.<br>LH20-GPFS2: mount: mount fs_gpfs01 on /fs_gpfs01 failed: Stale file
handle<br>mmdsh: LH20-GPFS2 remote shell process had return code 32.<br>mmmount: Command failed. Examine previous error messages to determine cause.<br><br>[root@LH20-GPFS1 ~]# mmmount -o rs /fs_gpfs01<br>mmmount: Mount point can not be a relative path name: rs<br>[root@LH20-GPFS1 ~]# mmmount -o rs fs_gpfs01<br>mmmount: Mount point can not be a relative path name: rs<br><br><br><br>I receive in "dmesg":<br><br>[ 18.338044] sd 2:0:0:1: [sdc] Attached SCSI disk<br>[ 141.363422] hvt_cn_callback: unexpected netlink message!<br>[ 141.366153] hvt_cn_callback: unexpected netlink message!<br>[ 4479.292850] tracedev: loading out-of-tree module taints kernel.<br>[ 4479.292888] tracedev: module verification failed: signature and/or<br>required key missing - tainting kernel<br>[ 4482.928413] ------------[ cut here ]------------<br>[ 4482.928445] WARNING: at fs/xfs/xfs_aops.c:906<br>xfs_do_writepage+0x537/0x550 [xfs]()<br>[ 4482.928446] Modules linked in: mmfs26(OE) mmfslinux(OE)<br>tracedev(OE) iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi ext4<br>mbcache jbd2 loop intel_powerclamp iosf_mbi sg pcspkr hv_utils<br>i2c_piix4 i2c_core nfsd auth_rpcgss nfs_acl lockd grace sunrpc<br>binfmt_misc ip_tables xfs libcrc32c sd_mod crc_t10dif<br>crct10dif_generic crct10dif_common ata_generic pata_acpi hv_netvsc<br>hyperv_keyboard hid_hyperv hv_storvsc hyperv_fb serio_raw fjes floppy<br>libata hv_vmbus dm_mirror dm_region_hash dm_log dm_mod<br>[ 4482.928471] CPU: 1 PID: 15210 Comm: mmfsd Tainted: G
OE<br>------------ 3.10.0-514.21.2.el7.x86_64 #1<br><br>On Tue, Jul 4, 2017 at 11:36 AM, IBM Spectrum Scale <<a href="mailto:scale@us.ibm.com" target="_blank">scale@us.ibm.com</a>>
wrote:<br>> What exactly do you mean by "I have received existing corrupted
GPFS 4.2.2<br>> lab"?<br>> Is the file system corrupted ? Maybe this error is then due to file
system<br>> corruption.<br>><br>> Can you once try: mmmount fs_gpfs01 -a<br>> If this does not work then try: mmmount -o rs fs_gpfs01<br>><br>> Let me know which mount is working.<br>><br>> Regards, The Spectrum Scale (GPFS) team<br>><br>> ------------------------------<wbr>------------------------------<wbr>------------------------------<wbr>------------------------<br>> If you feel that your question can benefit other users of Spectrum
Scale<br>> (GPFS), then please post it to the public IBM developerWorks Forum
at<br>> </font></tt><a href="https://www.ibm.com/developerworks/community/forums/html/forum?id=11111111-0000-0000-0000-000000000479" target="_blank"><tt><font size="2">https://www.ibm.com/<wbr>developerworks/community/<wbr>forums/html/forum?id=11111111-<wbr>0000-0000-0000-000000000479</font></tt></a><tt><font size="2">.<br>><br>> If your query concerns a potential software error in Spectrum Scale
(GPFS)<br>> and you have an IBM software maintenance contract, please contact<br>> <a href="tel:(800)%20237-5511" value="+18002375511" target="_blank">1-800-237-5511</a> in the United States or your local IBM Service Center
in<br>> other countries.<br>><br>> The forum is informally monitored as time permits and should not be
used for<br>> priority messages to the Spectrum Scale (GPFS) team.<br>><br>><br>><br>> From: Ilan Schwarts <<a href="mailto:ilan84@gmail.com" target="_blank">ilan84@gmail.com</a>><br>> To: <a href="mailto:gpfsug-discuss@spectrumscale.org" target="_blank">gpfsug-discuss@spectrumscale.<wbr>org</a><br>> Date: 07/04/2017 01:47 PM<br>> Subject: [gpfsug-discuss] Fail to mount
file system<br>> Sent by: <a href="mailto:gpfsug-discuss-bounces@spectrumscale.org" target="_blank">gpfsug-discuss-bounces@<wbr>spectrumscale.org</a><br>> ______________________________<wbr>__<br>><br>><br>><br>> Hi everyone, I have received existing corrupted GPFS 4.2.2 lab and
I<br>> am trying to make it work.<br>> There are 2 nodes in a cluster:<br>> [root@LH20-GPFS1 ~]# mmgetstate -a<br>><br>> Node number Node name GPFS state<br>> ------------------------------<wbr>------------<br>> 1 LH20-GPFS1
active<br>> 3 LH20-GPFS2
active<br>><br>> The Cluster status is:<br>> [root@LH20-GPFS1 ~]# mmlscluster<br>><br>> GPFS cluster information<br>> ========================<br>> GPFS cluster name: MyCluster.LH20-GPFS2<br>> GPFS cluster id: 10777108240438931454<br>> GPFS UID domain: MyCluster.LH20-GPFS2<br>> Remote shell command: /usr/bin/ssh<br>> Remote file copy command: /usr/bin/scp<br>> Repository type: CCR<br>><br>> Node Daemon node name IP address Admin node
name Designation<br>> ------------------------------<wbr>------------------------------<wbr>--------<br>> 1 LH20-GPFS1 10.10.158.61
LH20-GPFS1 quorum-manager<br>> 3 LH20-GPFS2 10.10.158.62
LH20-GPFS2<br>><br>> There is a file system:<br>> [root@LH20-GPFS1 ~]# mmlsnsd<br>><br>> File system Disk name NSD servers<br>> ------------------------------<wbr>------------------------------<wbr>---------------<br>> fs_gpfs01 nynsd1 (directly attached)<br>> fs_gpfs01 nynsd2 (directly attached)<br>><br>> [root@LH20-GPFS1 ~]#<br>><br>> On each Node, There is folder /fs_gpfs01<br>> The next step is to mount this fs_gpfs01 to be synced between the
2 nodes.<br>> While executing mmmount I get an exception:<br>> [root@LH20-GPFS1 ~]# mmmount /fs_gpfs01<br>> Tue Jul 4 11:14:18 IDT 2017: mmmount: Mounting file systems
...<br>> mount: mount fs_gpfs01 on /fs_gpfs01 failed: Wrong medium type<br>> mmmount: Command failed. Examine previous error messages to determine
cause.<br>><br>><br>> What am I doing wrong?<br>> ______________________________<wbr>_________________<br>> gpfsug-discuss mailing list<br>> gpfsug-discuss at <a href="http://spectrumscale.org" target="_blank">spectrumscale.org</a><br>> </font></tt><a href="http://gpfsug.org/mailman/listinfo/gpfsug-discuss" target="_blank"><tt><font size="2">http://gpfsug.org/mailman/<wbr>listinfo/gpfsug-discuss</font></tt></a><tt><font size="2"><br>><br>><br>><br>><br><br><br><br>-- <br><br><br>-<br>Ilan Schwarts<br><br></font></tt><br><br><br><br>______________________________<wbr>_________________<br>
<br></blockquote></div></div>
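The advice at the top of the thread, running dmesg with -T, needs a kernel/util-linux that supports the flag; where it is unavailable, the seconds-since-boot timestamps can be converted by hand. A minimal sketch, assuming a Linux host with /proc/uptime and GNU `date` (the 4482.928413 value is the xfs warning timestamp quoted above; `to_wallclock` is a hypothetical helper, not a system command):

```shell
#!/bin/sh
# Convert a dmesg "[ seconds.micros ]" timestamp to wall-clock time.
# Sketch only: assumes /proc/uptime exists and GNU date's -d "@epoch".
to_wallclock() {
  # boot time (epoch seconds) = now - uptime
  boot_epoch=$(awk -v now="$(date +%s)" '{ printf "%d", now - $1 }' /proc/uptime)
  # add the dmesg offset to the boot time and print it as a date
  date -d "@$(awk -v b="$boot_epoch" -v s="$1" 'BEGIN { printf "%d", b + s }')"
}

to_wallclock 4482.928413   # timestamp of the xfs WARNING quoted above
```

Comparing the converted time against the boot-time messages makes it easy to tell whether a warning in dmesg is stale or came from recent activity.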
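Before retrying mmmount, it is also worth confirming that the daemon really is active everywhere, since a down or arbitrating node will fail mounts as well. A minimal sketch: `check_states` below is a hypothetical helper (not a GPFS command), fed here with the mmgetstate output quoted in the thread; on a live cluster you would pipe `mmgetstate -a` into it instead.

```shell
#!/bin/sh
# Hypothetical helper: exit 0 only if every node in `mmgetstate -a`
# output reports the GPFS state "active".
check_states() {
  # Skip the two header lines; column 3 is the GPFS state.
  awk 'NR > 2 && NF >= 3 { if ($3 != "active") bad++ } END { exit bad ? 1 : 0 }'
}

# Fed with the output quoted above; live use: mmgetstate -a | check_states
check_states <<'EOF'
 Node number  Node name        GPFS state
 ------------------------------------------
       1      LH20-GPFS1       active
       3      LH20-GPFS2       active
EOF
echo "all nodes active (0=yes): $?"
```

In the thread both nodes were already active, which is why the next diagnostic step was mmlsnsd -X and a restricted mount rather than restarting the daemons.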