You can leave out the WHERE ... AND POOL_NAME LIKE 'deep' - that is redundant with the FROM POOL 'deep' clause.

In fact, at a slight additional overhead in mmapplypolicy processing (because it gets checked a little later in the game), you can also leave out MISC_ATTRIBUTES NOT LIKE '%2%', since the code is smart enough not to operate on files already marked as replicate(2).
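In other words, keeping the rule and pool names from the example quoted below, the trimmed-down rule could be as simple as:

RULE 'deepreplicate'
 migrate from pool 'deep' to pool 'deep'
 replicate(2)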
I believe mmapplypolicy .... -I yes means do any necessary data movement and/or replication "now".

Alternatively you can say -I defer, which will leave the files "ill-replicated" and then fix them up with mmrestripefs later.
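Using the filesystem and policy-file names from the example quoted below, the two styles look roughly like this:

# do the data movement / re-replication right away
mmapplypolicy gpfs0 -P replicate-policy.pol -I yes

# or just mark the files ill-replicated now and fix them up later
mmapplypolicy gpfs0 -P replicate-policy.pol -I defer
mmrestripefs gpfs0 -r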
The -I yes vs -I defer choice is the same as for mmchattr. Think of mmapplypolicy as a fast, parallel way to do

 find ... | xargs mmchattr ...

Advert: see also samples/ilm/mmfind -- the latest version should have an -xargs option.
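As a purely hypothetical illustration of that analogy (the path is a placeholder, and it assumes mmchattr's -r option sets the number of data replicas), the serial equivalent would be something like:

# hypothetical path; -r 2 asks for two data replicas, -I defer postpones the copying
find /gpfs/gpfs0/deep-data -type f | xargs mmchattr -r 2 -I defer
mmrestripefs gpfs0 -r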
From:    Jan-Frode Myklebust <janfrode@tanso.net>
To:      gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Date:    08/31/2016 04:44 PM
Subject: Re: [gpfsug-discuss] Data Replication
Sent by: gpfsug-discuss-bounces@spectrumscale.org

----------------------------------------------------------------------

Assuming your DeepFlash pool is named "deep", something like the following should work:

RULE 'deepreplicate'
 migrate from pool 'deep' to pool 'deep'
 replicate(2)
 where MISC_ATTRIBUTES NOT LIKE '%2%' and POOL_NAME LIKE 'deep'

"mmapplypolicy gpfs0 -P replicate-policy.pol -I yes"

and possibly "mmrestripefs gpfs0 -r" afterwards.

  -jf
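A quick way to spot-check the result on an individual file afterwards (path hypothetical) is mmlsattr -L, which reports the file's current and maximum replication factors:

mmlsattr -L /gpfs/gpfs0/scratch/somefile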
On Wed, Aug 31, 2016 at 8:01 PM, Brian Marshall <mimarsh2@vt.edu> wrote:

Daniel,

So here's my use case: I have a SanDisk IF150 (recently rebranded as DeepFlash) with 128TB of flash acting as a "fast tier" storage pool in our HPC scratch file system. Can I set the filesystem replication level to 1 and then write a policy engine rule to send small and/or recent files to the IF150 with a replication of 2?

Any other comments on the proposed usage strategy are welcome.

Thank you,
Brian Marshall
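For reference, a rough sketch of such a policy could look like the following; the 'capacity' pool name and the 30-day threshold are hypothetical placeholders, and note that a placement rule runs at file-creation time, before the size is known, so selecting by size has to happen in a later MIGRATE rule:

/* place newly created files on the flash pool with two data replicas */
RULE 'new-to-flash'
 SET POOL 'deep' REPLICATE(2)

/* later, move cold files to the capacity pool and drop back to one replica */
RULE 'age-out'
 MIGRATE FROM POOL 'deep' TO POOL 'capacity' REPLICATE(1)
 WHERE (DAYS(CURRENT_TIMESTAMP) - DAYS(ACCESS_TIME)) > 30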
On Wed, Aug 31, 2016 at 10:32 AM, Daniel Kidger <daniel.kidger@uk.ibm.com> wrote:

The other 'exception' is when a rule is used to convert a 1-way replicated file to 2-way, or when only one failure group is up due to HW problems. In that case the re-replication is done by whatever nodes are used for the rule or command line, which may include an NSD server.
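For example (node names hypothetical, filesystem and policy-file names as in Jan-Frode's note above), the set of nodes doing that work can be chosen explicitly with mmapplypolicy's -N option:

mmapplypolicy gpfs0 -P replicate-policy.pol -I yes -N nsd01,nsd02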
Daniel

IBM Spectrum Storage Software
+44 (0)7818 522266
Sent from my iPad using IBM Verse

----------------------------------------------------------------------

On 30 Aug 2016, 19:53:31, mimarsh2@vt.edu wrote:

From:    mimarsh2@vt.edu
To:      gpfsug-discuss@spectrumscale.org
Cc:
Date:    30 Aug 2016 19:53:31
Subject: Re: [gpfsug-discuss] Data Replication

Thanks. This confirms the numbers that I am seeing.

Brian

On Tue, Aug 30, 2016 at 2:50 PM, Laurence Horrocks-Barlow <laurence@qsplace.co.uk> wrote:

It's the client that does all the synchronous replication; this way the cluster is able to scale, as the clients do the leg work (so to speak).

The somewhat "exception" is when a GPFS NSD server (or a client with direct NSD access) exports the filesystem over a server-based protocol such as SMB. In that case the SMB server will do the replication, as the SMB client doesn't know about GPFS or its replication; essentially the SMB server is the GPFS client.

-- Lauz

On 30 August 2016 17:03:38 CEST, Bryan Banister <bbanister@jumptrading.com> wrote:
The NSD client handles the replication and will, as you stated, write one copy to one NSD (using the primary server for this NSD) and one to a different NSD in a different GPFS failure group (using quite likely, but not necessarily, a different NSD server that is the primary server for this alternate NSD).

Cheers,
-Bryan
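To see which NSDs are in which failure group, and what the filesystem's default and maximum data replication factors are (device name as used earlier in the thread), something like:

mmlsdisk gpfs0      # shows the failure group of each disk
mmlsfs gpfs0 -r -R  # default and maximum number of data replicas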
From: gpfsug-discuss-bounces@spectrumscale.org [mailto:gpfsug-discuss-bounces@spectrumscale.org] On Behalf Of Brian Marshall
Sent: Tuesday, August 30, 2016 9:59 AM
To: gpfsug main discussion list
Subject: [gpfsug-discuss] Data Replication

All,

If I set up a filesystem to have data replication of 2 (two copies of data), does the data get replicated at the NSD server or at the client? I.e., does the client send two copies over the network, or does the NSD server get a single copy and then replicate onto the storage NSDs?

I couldn't find a place in the docs that talked about this specific point.

Thank you,
Brian Marshall