ESS nodes have cache, but what matters most for this type of workload is a very large metadata cache, and for SMB/NFS workloads that cache lives on the CES nodes. So if you know your clients will use this 300k-file directory a lot, you want a very large maxFilesToCache setting on those nodes. An alternative is to install an LROC device and configure a larger stat cache (maxStatCache); this helps especially if you have multiple large directories and want to cache as many entries as possible from all of them.

Make sure you have enough token manager capacity and memory if you have multiple CES nodes and they will all run with high settings.
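As a rough illustration only: maxFilesToCache and maxStatCache are mmchconfig tunables, but the values below are placeholders, not recommendations; size them against the memory actually available on your CES nodes.

    # raise the metadata caches on the protocol nodes only (cesNodes node class)
    mmchconfig maxFilesToCache=1000000 -N cesNodes
    mmchconfig maxStatCache=2000000 -N cesNodes

    # check what is currently in effect
    mmlsconfig maxFilesToCache
    mmlsconfig maxStatCache

Note that changes to these values generally only take effect after GPFS is restarted on the affected nodes, and every cached object consumes daemon memory, which is why the token manager and memory sizing mentioned above matters.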
sven

------------------------------------------
Sven Oehme
Scalable Storage Research
email: oehmes@us.ibm.com
Phone: +1 (408) 824-8904
IBM Almaden Research Lab
------------------------------------------

From: Mark Bush <Mark.Bush@siriuscom.com>
To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Date: 05/09/2017 05:25 PM
Subject: [gpfsug-discuss] CES and Directory list populating very slowly
Sent by: gpfsug-discuss-bounces@spectrumscale.org

I have a customer who is struggling (they already have a PMR open and it is being actively worked on now); I'm simply seeking an understanding of potential places to look. They have an ESS with a few CES nodes in front, and clients connect via SMB to the CES nodes. One fileset has about 300k smallish files in it, and when the client opens the share in a Windows file browser it takes around 30 minutes to finish populating the file list.

Here's where my confusion is. When a client connects to a CES node, this is all the job of CES and its protocol services, in this case CTDB/Samba. But the flow of this is where maybe I'm a little fuzzy. Obviously the CES nodes act as clients to the NSD servers (the I/O nodes in ESS land), so the data doesn't really live on the protocol node; it passes requests off to the NSD servers for regular I/O processing. Does the CES node do some type of caching? I've heard talk of potentially using LROC on CES nodes, but I'm curious whether all of this is already being cached in the pagepool.

What could cause a mostly metadata-related, simple directory listing to take what seems to the customer like a very long time for a couple hundred thousand files?

Mark

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss