[gpfsug-discuss] NSD access routes

Dave Goodbourn dave at milk-vfx.com
Mon Jun 5 15:15:00 BST 2017


Ha! A quick shrink of the pagepool and we're in action! Thanks all.
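
For the record, the change was just something along the lines of the command below; the value is illustrative only, and a smaller pagepool simply means blocks get evicted to LROC sooner. Depending on your release you may need to recycle GPFS on the node rather than rely on -i.

  mmchconfig pagepool=8G -i -N <nodename>   # <nodename> is a placeholder; -i attempts to apply the change immediately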

Dave.

----------------------------------------------------
*Dave Goodbourn*
Head of Systems
*MILK <http://www.milk-vfx.com/> VISUAL EFFECTS*

5th floor, Threeways House,
40-44 Clipstone Street London, W1W 5DW
Tel: *+44 (0)20 3697 8448*
Mob: *+44 (0)7917 411 069*

On 5 June 2017 at 15:03, Sven Oehme <oehmes at gmail.com> wrote:

> yes, as long as you haven't pushed anything to it (meaning the pagepool has
> come under enough pressure to have to free up space), you won't see anything
> in the stats :-)
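>
> For example, reading back more data than the pagepool can hold should start
> spilling blocks out to the LROC device. A rough, untested way to generate that
> pressure from a client (the path is a placeholder):
>
>   for f in /gpfs/fs0/somedir/*; do dd if="$f" of=/dev/null bs=1M; done
>
> Once blocks start getting evicted, the "Total objects stored" counters in
> mmdiag --lroc should begin to move.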
>
> sven
>
>
> On Mon, Jun 5, 2017 at 7:00 AM Dave Goodbourn <dave at milk-vfx.com> wrote:
>
>> OK, I'm going to hang my head in the corner... RTFM... I've not filled the
>> memory buffer pool yet, so I doubt it will have anything in it yet!! :(
>>
>> On 5 June 2017 at 14:55, Dave Goodbourn <dave at milk-vfx.com> wrote:
>>
>>> OK, partly ignore that last email. The output still isn't updating, but I
>>> realise the 'Statistics from' line just shows when collection started, so
>>> that line probably won't change! :(
>>>
>>> Still, nothing seems to be getting cached though.
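>>>
>>> It's probably worth double-checking that LROC is enabled for data as well
>>> as inodes and directories; something like the following (untested) should
>>> show the relevant settings:
>>>
>>>   mmdiag --config | grep -i lroc    # lrocData / lrocInodes / lrocDirectories etc.
>>>   mmlsconfig | grep -i lroc         # shows any lroc* settings in the cluster config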
>>>
>>> On 5 June 2017 at 14:49, Dave Goodbourn <dave at milk-vfx.com> wrote:
>>>
>>>> Thanks Bob,
>>>>
>>>> That pagepool comment has just answered my next question!
>>>>
>>>> But it doesn't seem to be working. Here's my mmdiag --lroc output:
>>>>
>>>> === mmdiag: lroc ===
>>>> LROC Device(s): '0AF0000259355BA8#/dev/sdb;0AF0000259355BA9#/dev/sdc;0AF0000259355BAA#/dev/sdd;'
>>>> status Running
>>>> Cache inodes 1 dirs 1 data 1  Config: maxFile 0 stubFile 0
>>>> Max capacity: 1151997 MB, currently in use: 0 MB
>>>> Statistics from: Mon Jun  5 13:40:50 2017
>>>>
>>>> Total objects stored 0 (0 MB) recalled 0 (0 MB)
>>>>       objects failed to store 0 failed to recall 0 failed to inval 0
>>>>       objects queried 0 (0 MB) not found 0 = 0.00 %
>>>>       objects invalidated 0 (0 MB)
>>>>
>>>>       Inode objects stored 0 (0 MB) recalled 0 (0 MB) = 0.00 %
>>>>       Inode objects queried 0 (0 MB) = 0.00 % invalidated 0 (0 MB)
>>>>       Inode objects failed to store 0 failed to recall 0 failed to query 0 failed to inval 0
>>>>
>>>>       Directory objects stored 0 (0 MB) recalled 0 (0 MB) = 0.00 %
>>>>       Directory objects queried 0 (0 MB) = 0.00 % invalidated 0 (0 MB)
>>>>       Directory objects failed to store 0 failed to recall 0 failed to query 0 failed to inval 0
>>>>
>>>>       Data objects stored 0 (0 MB) recalled 0 (0 MB) = 0.00 %
>>>>       Data objects queried 0 (0 MB) = 0.00 % invalidated 0 (0 MB)
>>>>       Data objects failed to store 0 failed to recall 0 failed to query 0 failed to inval 0
>>>>
>>>>   agent inserts=0, reads=0
>>>>         response times (usec):
>>>>         insert min/max/avg=0/0/0
>>>>         read   min/max/avg=0/0/0
>>>>
>>>>   ssd   writeIOs=0, writePages=0
>>>>         readIOs=0, readPages=0
>>>>         response times (usec):
>>>>         write  min/max/avg=0/0/0
>>>>         read   min/max/avg=0/0/0
>>>>
>>>>
>>>> I've restarted GPFS on that node just in case, but that didn't seem to
>>>> help. I have LROC on a node that DOESN'T have direct access to an NSD, so
>>>> it will hopefully cache files that get requested over NFS.
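>>>>
>>>> For context, LROC devices are defined as local NSDs with usage=localCache;
>>>> the usual recipe is a stanza roughly like the one below (NSD and node names
>>>> are placeholders, untested as written) fed to mmcrnsd -F:
>>>>
>>>>   %nsd: device=/dev/sdb
>>>>     nsd=nfsgw01_lroc_sdb
>>>>     servers=nfsgw01
>>>>     usage=localCache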
>>>>
>>>> How often are these stats updated? The 'Statistics from' line doesn't
>>>> seem to change when I run the command again.
>>>>
>>>> Dave,
>>>>
>>>> On 5 June 2017 at 13:48, Oesterlin, Robert <Robert.Oesterlin at nuance.com> wrote:
>>>>
>>>>> Hi Dave
>>>>>
>>>>> I’ve done a large-scale (600 node) LROC deployment here - feel free to
>>>>> reach out if you have questions.
>>>>>
>>>>> mmdiag --lroc is about all there is, but it does give you a pretty good
>>>>> idea of how the cache is performing; you just can't tell which files are
>>>>> cached. Also, watch out that the LROC cache will steal pagepool memory
>>>>> (1% of the LROC cache size).
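>>>>>
>>>>> For scale: at 1%, roughly every 1 TB of LROC costs on the order of 10 GB
>>>>> of pagepool. For keeping an eye on the cache, an untested one-liner along
>>>>> these lines over mmdiag --lroc is normally enough:
>>>>>
>>>>>   watch -n 30 "mmdiag --lroc | grep -E 'in use|objects stored|failed'"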
>>>>>
>>>>> Bob Oesterlin
>>>>> Sr Principal Storage Engineer, Nuance
>>>>>
>>>>> *From: *<gpfsug-discuss-bounces at spectrumscale.org> on behalf of Dave Goodbourn <dave at milk-vfx.com>
>>>>> *Reply-To: *gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
>>>>> *Date: *Monday, June 5, 2017 at 7:19 AM
>>>>> *To: *gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
>>>>> *Subject: *[EXTERNAL] Re: [gpfsug-discuss] NSD access routes
>>>>>
>>>>> I'm testing out the LROC idea. All seems to be working well, but is
>>>>> there any way to monitor what's cached? How full it might be? The
>>>>> performance, etc.?
>>>>>
>>>>> I can see some stats in mmfsadm dump lroc but that's about it.