[gpfsug-discuss] Data Replication

Jan-Frode Myklebust janfrode at tanso.net
Wed Aug 31 21:44:04 BST 2016


Assuming your DeepFlash pool is named "deep", something like the following
should work:

RULE 'deepreplicate'
    migrate from pool 'deep' to pool 'deep'
    replicate(2)
    where MISC_ATTRIBUTES NOT LIKE '%2%' and POOL_NAME LIKE 'deep'


"mmapplypolicy gpfs0 -P replicate-policy.pol  -I yes"

and possibly "mmrestripefs gpfs0 -r" afterwards.


  -jf


On Wed, Aug 31, 2016 at 8:01 PM, Brian Marshall <mimarsh2 at vt.edu> wrote:

> Daniel,
>
> So here's my use case:  I have a Sandisk IF150 (branded as DeepFlash
> recently) with 128TB of flash acting as a "fast tier" storage pool in our
> HPC scratch file system.  Can I set the filesystem replication level to 1
> then write a policy engine rule to send small and/or recent files to the
> IF150 with a replication of 2?
>
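> Roughly what I have in mind (the pool names and the size/age thresholds
> below are just placeholders) is a rule along these lines, driven
> periodically by mmapplypolicy, since as I understand it a SET POOL
> placement rule can't key on file size or access time (those aren't known
> when the file is created):
>
>     RULE 'small_or_recent_to_flash'
>         migrate from pool 'system' to pool 'deep'
>         replicate(2)
>         where FILE_SIZE <= 1048576
>            or (DAYS(CURRENT_TIMESTAMP) - DAYS(ACCESS_TIME)) <= 7
>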
> Any other comments on the proposed usage strategy are helpful.
>
> Thank you,
> Brian Marshall
>
> On Wed, Aug 31, 2016 at 10:32 AM, Daniel Kidger <daniel.kidger at uk.ibm.com>
> wrote:
>
>> The other 'Exception' is when a rule is used to convert a 1-way
>> replicated file to 2-way, or when only one failure group is up due to HW
>> problems. In that case the re-replication is done by whatever nodes are
>> running the rule or command line, which may include an NSD server.
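>>
>> If you want to limit which nodes end up doing that work, both commands
>> take a node list or node class via -N, e.g. (assuming a node class named
>> 'nsdNodes'; substitute your own):
>>
>>     mmapplypolicy gpfs0 -P replicate-policy.pol -I yes -N nsdNodes
>>     mmrestripefs gpfs0 -r -N nsdNodes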
>>
>> Daniel
>>
>> IBM Spectrum Storage Software
>> +44 (0)7818 522266
>> Sent from my iPad using IBM Verse
>>
>>
>> ------------------------------
>> On 30 Aug 2016, 19:53:31, mimarsh2 at vt.edu wrote:
>>
>> From: mimarsh2 at vt.edu
>> To: gpfsug-discuss at spectrumscale.org
>> Cc:
>> Date: 30 Aug 2016 19:53:31
>> Subject: Re: [gpfsug-discuss] Data Replication
>>
>>
>> Thanks. This confirms the numbers that I am seeing.
>>
>> Brian
>>
>> On Tue, Aug 30, 2016 at 2:50 PM, Laurence Horrocks-Barlow <
>> laurence at qsplace.co.uk> wrote:
>>
>>> It's the client that does all the synchronous replication; this way the
>>> cluster is able to scale, as the clients do the leg work (so to speak).
>>>
>>> The somewhat "exception" is if a GPFS NSD server (or a client with direct
>>> NSD access) uses a server-based protocol such as SMB. In this case the SMB
>>> server will do the replication, as the SMB client doesn't know about GPFS
>>> or its replication; essentially the SMB server is the GPFS client.
>>>
>>> -- Lauz
>>>
>>> On 30 August 2016 17:03:38 CEST, Bryan Banister <
>>> bbanister at jumptrading.com> wrote:
>>>
>>>> The NSD Client handles the replication and will, as you stated, write
>>>> one copy to one NSD (using the primary server for this NSD) and one to a
>>>> different NSD in a different GPFS failure group (using quite likely, but
>>>> not necessarily, a different NSD server that is the primary server for this
>>>> alternate NSD).
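>>>>
>>>> The failure groups themselves come from the NSD definitions; a stanza
>>>> sketch (all names made up) would look something like:
>>>>
>>>>     %nsd: nsd=flash01 servers=nsdsrv1,nsdsrv2 usage=dataOnly failureGroup=1 pool=deep
>>>>     %nsd: nsd=flash02 servers=nsdsrv2,nsdsrv1 usage=dataOnly failureGroup=2 pool=deep
>>>>
>>>> and "mmlsdisk gpfs0" shows the failure group assigned to each disk.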
>>>>
>>>> Cheers,
>>>>
>>>> -Bryan
>>>>
>>>>
>>>>
>>>> From: gpfsug-discuss-bounces at spectrumscale.org
>>>> [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Brian Marshall
>>>> Sent: Tuesday, August 30, 2016 9:59 AM
>>>> To: gpfsug main discussion list
>>>> Subject: [gpfsug-discuss] Data Replication
>>>>
>>>>
>>>>
>>>> All,
>>>>
>>>>
>>>>
>>>> If I set up a filesystem to have data replication of 2 (2 copies of
>>>> data), does the data get replicated at the NSD Server or at the client?
>>>> I.e., does the client send 2 copies over the network, or does the NSD
>>>> Server get a single copy and then replicate onto the storage NSDs?
>>>>
>>>>
>>>>
>>>> I couldn't find a place in the docs that talked about this specific
>>>> point.
>>>>
>>>>
>>>>
>>>> Thank you,
>>>>
>>>> Brian Marshall
>>>>
>>>>
>>> --
>>> Sent from my Android device with K-9 Mail. Please excuse my brevity.
>>>
>>
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>
>

