<div dir="ltr"><div>Placement policy applies only to writes, and I had thought that GPFS did enough writing to the in-memory "pagepool" to figure out a file's size before committing the write to a pool.<br><br></div>I also admit I don't know all of the innards of GPFS. Perhaps being a copy-on-write type filesystem prevents this from occurring.<br></div><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Oct 31, 2016 at 1:29 PM, Chris Scott <span dir="ltr"><<a href="mailto:chrisjscott@gmail.com" target="_blank">chrisjscott@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Hi Brian<div><br></div><div>This is exactly what I do with an SSD tier on top of 10K and 7.2K tiers.</div><div><br></div><div>HAWC is another recent option that might address Eric's requirement, but it needs further consideration of the read requirements you have for the small files.</div><div><br></div><div>Cheers</div><span class="HOEnZb"><font color="#888888"><div>Chris</div></font></span></div><div class="HOEnZb"><div class="h5"><div class="gmail_extra"><br><div class="gmail_quote">On 31 October 2016 at 17:23, Brian Marshall <span dir="ltr"><<a href="mailto:mimarsh2@vt.edu" target="_blank">mimarsh2@vt.edu</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">When creating a "fast tier" storage pool in a filesystem, is the normal approach to create a placement policy that places all files in the fast tier and then migrates out old and large files?<span class="m_-3521381263447691596HOEnZb"><font color="#888888"><div><br></div><div><br></div><div>Brian Marshall</div></font></span></div><div class="m_-3521381263447691596HOEnZb"><div class="m_-3521381263447691596h5"><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Oct 31, 2016 at 1:20 PM, Jez Tucker <span dir="ltr"><<a href="mailto:jez.tucker@gpfsug.org" 
target="_blank">jez.tucker@gpfsug.org</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div bgcolor="#FFFFFF" text="#000000">
Hey Bryan<br>
<br>
 There was a previous RFE for path placement from the UG, but Yuri
told me this was not technically possible, as an inode has no
knowledge of its parent dentry (IIRC). You can see this in
effect in the C API. It is possible to work this out at kernel
level, but it is so costly that it becomes non-viable at scale /
performance.<br>
<br>
IBMers please chip in and expand if you will.<br>
<br>
Jez<div><div class="m_-3521381263447691596m_1789160400644350486h5"><br>
<br>
<div class="m_-3521381263447691596m_1789160400644350486m_-5822192827025631491moz-cite-prefix">On 31/10/16 17:09, Bryan Banister
wrote:<br>
</div>
</div></div><blockquote type="cite"><div><div class="m_-3521381263447691596m_1789160400644350486h5">
<div class="m_-3521381263447691596m_1789160400644350486m_-5822192827025631491WordSection1">
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1f497d">The
File Placement Policy that you are trying to set cannot use
the size of the file to determine the placement of the file
in a GPFS Storage Pool. This is because GPFS has no idea
what the file size will be when the file is open()’d for
writing.<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1f497d"><u></u>&nbsp;<u></u></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1f497d">Hope
that helps!<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1f497d">-Bryan<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1f497d"><u></u>&nbsp;<u></u></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1f497d">PS.
I really wish that we could use a path for specifying data
placement in a GPFS Pool, and not just the file name, owner,
etc. I’ll submit an RFE for this.<u></u><u></u></span></p>
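<div>
<p class="MsoNormal">One partial workaround, to my knowledge, is that placement rules can match on FILESET_NAME, so linking an independent fileset at the directory you care about approximates path-based placement. The fileset and pool names below are hypothetical, not from this thread:</p>
<pre>/* Sketch only: create and link a fileset at the target path first, e.g.
     mmcrfileset fsname fastdata
     mmlinkfileset fsname fastdata -J /gpfs/fsname/fastdata
   then place new files created in that fileset on the fast pool. */
RULE 'fastpath' SET POOL 'ssd_pool' FOR FILESET ('fastdata')</pre>
</div>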
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1f497d"><u></u>&nbsp;<u></u></span></p>
<p class="MsoNormal"><b><span style="font-size:11.0pt;font-family:"Calibri","sans-serif"">From:</span></b><span style="font-size:11.0pt;font-family:"Calibri","sans-serif"">
<a class="m_-3521381263447691596m_1789160400644350486m_-5822192827025631491moz-txt-link-abbreviated" href="mailto:gpfsug-discuss-bounces@spectrumscale.org" target="_blank">gpfsug-discuss-bounces@spectrumscale.org</a>
[<a class="m_-3521381263447691596m_1789160400644350486m_-5822192827025631491moz-txt-link-freetext" href="mailto:gpfsug-discuss-bounces@spectrumscale.org" target="_blank">mailto:gpfsug-discuss-bounces@spectrumscale.org</a>]
<b>On Behalf Of </b>J. Eric Wonderley<br>
<b>Sent:</b> Monday, October 31, 2016 11:53 AM<br>
<b>To:</b> gpfsug main discussion list<br>
<b>Subject:</b> [gpfsug-discuss] wanted...gpfs policy that
places larger files onto a pool based on size<u></u><u></u></span></p>
<p class="MsoNormal"><u></u>&nbsp;<u></u></p>
<div>
<p class="MsoNormal">I wanted to do something like this...<u></u><u></u></p>
<div>
<p class="MsoNormal" style="margin-bottom:12.0pt"><br>
[root@cl001 ~]# cat /opt/gpfs/home.ply<br>
/*Failsafe migration of old small files back to spinning
media pool(fc_8T) */<br>
RULE 'theshold' MIGRATE FROM POOL 'system'
THRESHOLD(90,70) WEIGHT(ACCESS_TIME) TO POOL 'fc_8T'<br>
/*Write files larger than 16MB to pool called "fc_8T" */<br>
RULE 'bigfiles' SET POOL 'fc_8T' WHERE
FILE_SIZE>16777216<br>
/*Move anything else to system pool */<br>
RULE 'default' SET POOL 'system'<u></u><u></u></p>
</div>
<div>
<p class="MsoNormal" style="margin-bottom:12.0pt">Apparently
there is no happiness using FILE_SIZE in a placement
policy:<br>
[root@cl001 ~]# mmchpolicy home /opt/gpfs/home.ply<br>
Error while validating policy `home.ply': rc=22:<br>
PCSQLERR: 'FILE_SIZE' is an unsupported or unknown
attribute or variable name in this context.<br>
PCSQLCTX: at line 4 of 6: RULE 'bigfiles' SET POOL 'fc_8T'
WHERE {{{FILE_SIZE}}}>16777216<br>
runRemoteCommand_v2: cl002.cl.arc.internal: tschpolicy
/dev/home /var/mmfs/tmp/tspolicyFile.mmchpolicy.113372 -t
home.ply  failed.<br>
mmchpolicy: Command failed. Examine previous error
messages to determine cause.<u></u><u></u></p>
</div>
<div>
<p class="MsoNormal">Can anyone suggest a way to accomplish
this using policy?<u></u><u></u></p>
</div>
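<div>
<p class="MsoNormal">Since FILE_SIZE appears to be available only to mmapplypolicy-driven rules and not at file creation, one pattern (a sketch reusing the pool names above, not a confirmed fix) is to invert the logic: place everything on the fast pool and periodically migrate large files out:</p>
<pre>/* Placement: all new files land on the fast 'system' pool. */
RULE 'default' SET POOL 'system'
/* Migration, run via e.g. "mmapplypolicy home -P home.ply":
   FILE_SIZE is valid in this context, so large files move to
   spinning disk after the fact rather than at open(). */
RULE 'bigfiles' MIGRATE FROM POOL 'system' TO POOL 'fc_8T' WHERE FILE_SIZE > 16777216</pre>
</div>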
</div>
</div>
<br>
<br>
<br>
</div></div><pre>______________________________<wbr>_________________
gpfsug-discuss mailing list
gpfsug-discuss at <a href="http://spectrumscale.org" target="_blank">spectrumscale.org</a>
<a class="m_-3521381263447691596m_1789160400644350486m_-5822192827025631491moz-txt-link-freetext" href="http://gpfsug.org/mailman/listinfo/gpfsug-discuss" target="_blank">http://gpfsug.org/mailman/list<wbr>info/gpfsug-discuss</a>
</pre>
</blockquote>
</div>
<br></blockquote></div><br></div>
</div></div>
<br></blockquote></div><br></div>
</div></div>
<br></blockquote></div><br></div>