If you haven't already, measure the time directly on the CES node command
line, skipping the Windows and Samba overheads:

time ls -l /path

or

time ls -lR /path

depending on which you're interested in.
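A quick way to separate cold-cache from warm-cache behaviour is to run the
same listing twice back to back (a rough sketch; /path is a placeholder):

# first pass is cold; the second should be served from the metadata cache
time ls -lR /path > /dev/null
time ls -lR /path > /dev/null

If the second pass is dramatically faster, the wait is mostly cold-cache
metadata fetches rather than the listing itself.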
</font><font size=1 face="sans-serif">"Sven Oehme"
<oehmes@us.ibm.com></font><br><font size=1 color=#5f5f5f face="sans-serif">To:
</font><font size=1 face="sans-serif">gpfsug main discussion
list <gpfsug-discuss@spectrumscale.org></font><br><font size=1 color=#5f5f5f face="sans-serif">Date:
</font><font size=1 face="sans-serif">05/09/2017 01:01 PM</font><br><font size=1 color=#5f5f5f face="sans-serif">Subject:
</font><font size=1 face="sans-serif">Re: [gpfsug-discuss]
CES and Directory list populating very slowly</font><br><font size=1 color=#5f5f5f face="sans-serif">Sent by:
</font><font size=1 face="sans-serif">gpfsug-discuss-bounces@spectrumscale.org</font><br><hr noshade><br><br><br><font size=2>ESS nodes have cache, but what matters most for this type
of workloads is to have a very large metadata cache, this resides on the
CES node for SMB/NFS workloads. so if you know that your client will use
this 300k directory a lot you want to have a very large maxfilestocache
setting on this nodes. alternative solution is to install a LROC device
and configure a larger statcache, this helps especially if you have multiple
larger directories and want to cache as many as possible from all of them.<br>make sure you have enough tokenmanager and memory on them if you have multiple
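For concreteness, that tuning is done with mmchconfig. A rough sketch with
purely illustrative values (size them against the CES nodes' memory), and
assuming the predefined cesNodes node class:

# number of fully cached inodes; the main metadata cache knob
mmchconfig maxFilesToCache=1000000 -N cesNodes

# cheaper stat-only cache entries; pairs well with an LROC device
mmchconfig maxStatCache=2000000 -N cesNodes

Both settings typically require restarting GPFS on those nodes before they
take effect.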
sven

------------------------------------------
Sven Oehme
Scalable Storage Research
email: oehmes@us.ibm.com
Phone: +1 (408) 824-8904
IBM Almaden Research Lab
------------------------------------------

From: Mark Bush <Mark.Bush@siriuscom.com>
To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Date: 05/09/2017 05:25 PM
Subject: [gpfsug-discuss] CES and Directory list populating very slowly
Sent by: gpfsug-discuss-bounces@spectrumscale.org

----------------------------------------------------------------------
I have a customer who is struggling (they already have a PMR open and it's
being actively worked on now); I'm simply seeking an understanding of
potential places to look. They have an ESS with a few CES nodes in front,
and clients connect via SMB to the CES nodes. One fileset has about 300k
smallish files in it, and when the client opens a Windows file browser it
takes around 30 minutes to finish populating the file list in this SMB
share.
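Back of the envelope: 300,000 entries in roughly 30 minutes works out to
300000 / 1800, about 167 entries per second, which looks like per-entry
latency (stat round trips, cache misses) rather than a throughput limit.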
Here's where my confusion is. When a client connects to a CES node, this
is all the job of CES and its protocol services, in this case CTDB/Samba.
But the flow is where I'm maybe a little fuzzy. Obviously the CES nodes
act as clients to the NSD servers (the I/O nodes in ESS land), so the data
doesn't really live on the protocol node; it passes requests off to the
NSD servers for regular I/O processing. Does the CES node do some type of
caching? I've heard talk of LROC on CES nodes, but I'm curious whether all
of this is already being held in the pagepool.
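For reference, the settings in question can be read directly on a CES
node; a sketch using standard Spectrum Scale commands:

mmlsconfig pagepool
mmlsconfig maxFilesToCache
mmlsconfig maxStatCache

# memory actually in use by the GPFS daemon, including its caches
mmdiag --memory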
What could cause a simple, mostly metadata-related directory lookup to
take what seems to the customer like a very long time for a couple hundred
thousand files?

Mark
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss