[gpfsug-discuss] LROC

Matt Weil mweil at wustl.edu
Wed Dec 28 19:57:18 GMT 2016


No, I will do that next.


On 12/28/16 1:55 PM, Sven Oehme wrote:
> Did you restart the daemon on that node after you fixed it?
>
> Matt Weil --- Re: [gpfsug-discuss] LROC ---
>
> From: 	"Matt Weil" <mweil at wustl.edu>
> To: 	gpfsug-discuss at spectrumscale.org
> Date: 	Wed, Dec 28, 2016 8:52 PM
> Subject: 	Re: [gpfsug-discuss] LROC
>
> ------------------------------------------------------------------------
>
> OK, got that fixed; it now shows as status Shutdown
>
>> [root at ces1 ~]# mmdiag --lroc
>>
>> === mmdiag: lroc ===
>> LROC Device(s):
>> '0A6403AA58641546#/dev/disk/by-id/nvme-Dell_Express_Flash_NVMe_SM1715_1.6TB_SFF_______S29GNYAH200016;'
>> status Shutdown
>> Cache inodes 1 dirs 1 data 1  Config: maxFile 1073741824 stubFile
>> 1073741824
>> Max capacity: 0 MB, currently in use: 0 MB
>> Statistics from: Wed Dec 28 13:49:27 2016
>
>
>
> On 12/28/16 1:06 PM, Sven Oehme wrote:
>
> you have no device configured; that's why it doesn't show any stats:
>
> >>> LROC Device(s): 'NULL' status Idle
>
> run mmlsnsd -X to see if GPFS can see the path to the device. Most
> likely it doesn't show up there, and you need to adjust your nsddevices
> list to include it, especially if it is an NVMe device.
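The nsddevices adjustment Sven describes can be sketched as a user exit like the one below. The path `/var/mmfs/etc/nsddevices` is the documented GPFS user-exit location, but the device paths here are hypothetical examples, not taken from this thread:

```shell
#!/bin/bash
# Sketch of a /var/mmfs/etc/nsddevices user exit (path per the GPFS
# documentation). GPFS runs this script during device discovery; every
# "device deviceType" line it prints is added to the NSD candidate list.
# The device paths below are hypothetical -- substitute your own NVMe names.
for dev in /dev/nvme0n1 /dev/nvme1n1; do
    echo "$dev generic"
done
# In the real user exit, ending with "return 0" (the script is sourced)
# tells GPFS to skip its built-in discovery; "return 1" would run the
# built-in scan as well.
```

Once the device shows up in `mmlsnsd -X`, it can be named in the node's LROC configuration.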
>
> sven
>
>
> ------------------------------------------
> Sven Oehme
> Scalable Storage Research
> email: oehmes at us.ibm.com
> Phone: +1 (408) 824-8904
> IBM Almaden Research Lab
> ------------------------------------------
>
> From: Matt Weil <mweil at wustl.edu>
> To: <gpfsug-discuss at spectrumscale.org>
> Date: 12/28/2016 07:02 PM
> Subject: Re: [gpfsug-discuss] LROC
> Sent by: gpfsug-discuss-bounces at spectrumscale.org
>
> ------------------------------------------------------------------------
>
>
>
> So I have minReleaseLevel 4.1.1.0. Is that too old?
>
>
> On 12/28/16 11:50 AM, Aaron Knister wrote:
> > Hey Matt,
> >
> > We ran into a similar thing and, if I recall correctly, an
> > mmchconfig release=LATEST was required to get LROC working which, of
> > course, would boot your 3.5.0.7 client from the cluster.
> >
> > -Aaron
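Put together, the check-and-upgrade steps Aaron and Sven describe look roughly like the following. This is an illustrative sequence against a live cluster, not verified here; `<node>` is a placeholder for the LROC node's name:

```shell
# 1. Check the minimum release level the cluster currently enforces.
mmlsconfig minReleaseLevel

# 2. Raise it to match the installed code. Note this is one-way, and
#    back-level clients (e.g. a 3.5.0.7 node) can no longer join.
mmchconfig release=LATEST

# 3. LROC configuration changes take effect after a daemon restart
#    on the affected node.
mmshutdown -N <node>
mmstartup -N <node>
```

After the restart, `mmdiag --lroc` should show the device with status Running rather than Idle or Shutdown.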
> >
> > On 12/28/16 11:44 AM, Matt Weil wrote:
> >> This is enabled on this node, but mmdiag does not seem to show it
> >> caching. Did I miss something? I do have one file system in the
> >> cluster that is running 3.5.0.7; I wonder if that is causing this.
> >>> [root at ces1 ~]# mmdiag --lroc
> >>>
> >>> === mmdiag: lroc ===
> >>> LROC Device(s): 'NULL' status Idle
> >>> Cache inodes 1 dirs 1 data 1  Config: maxFile 1073741824 stubFile
> >>> 1073741824
> >>> Max capacity: 0 MB, currently in use: 0 MB
> >>> Statistics from: Tue Dec 27 11:21:14 2016
> >>>
> >>> Total objects stored 0 (0 MB) recalled 0 (0 MB)
> >>>       objects failed to store 0 failed to recall 0 failed to inval 0
> >>>       objects queried 0 (0 MB) not found 0 = 0.00 %
> >>>       objects invalidated 0 (0 MB)
> >>
> >> _______________________________________________
> >> gpfsug-discuss mailing list
> >> gpfsug-discuss at spectrumscale.org
> >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
> >>
> >
>

