[gpfsug-discuss] Spectrum Scale - how to get RPO=0

Alec.Effrat at wellsfargo.com
Mon May 24 18:23:38 BST 2021


In the past we used EMC RecoverPoint with a Spectrum Scale file system.  It isn’t officially supported, and it doesn’t provide an online replica, but it keeps the I/O synchronous and offers DVR-style file system playback, and we had written our own integrations for it via its SSH interface.  It is a very competent solution and pairs well with GPFS Spectrum Scale.

Our Spectrum Scale interface basically had to do the following:

1) Ship updates to the mmdrfs file to the target cluster.

2) In a controlled failover, tag the image once GPFS was stopped.

3) Bring the BCP cluster up on the tagged image.

4) Run an awk script that stripped all the host details out of the mmdrfs file and pointed it at a single host in the BCP cluster.

5) Import the GPFS file system using the modified mmdrfs file.
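Steps 1, 4, and 5 above can be sketched roughly as follows. The paths, hostnames, and the hostname substitution are hypothetical placeholders rather than our actual replication scripts (a sed substitution stands in here for the awk script), and the mmimportfs/mmmount invocations should be checked against your Scale release:

```shell
#!/bin/sh
# Rough sketch of steps 1, 4, and 5. All paths and hostnames below are
# hypothetical placeholders, not the actual scripts.
set -eu

DRFS_FILE=/var/mmfs/dr/fs1.mmdrfs   # hypothetical location of the shipped config file
BCP_NODE=bcpnode1                   # hypothetical single host in the BCP cluster
OLD_NODES="prodnode1 prodnode2"     # hypothetical source-cluster hosts

# 1) Ship updates to the config file to the target cluster
scp "$DRFS_FILE" "$BCP_NODE:$DRFS_FILE"

# 4) Strip the source-cluster host details, pointing everything at the
#    single BCP host (stand-in for the awk script described above)
cp "$DRFS_FILE" "$DRFS_FILE.bcp"
for node in $OLD_NODES; do
    sed -i "s/$node/$BCP_NODE/g" "$DRFS_FILE.bcp"
done

# 5) Import and mount the file system from the modified config file
mmimportfs fs1 -i "$DRFS_FILE.bcp"
mmmount fs1
```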

Just one way to skin the cat, for consideration.  I can provide more detail on the replication scripts if anyone is interested.

Alec Effrat
SAS Lead, AVP
Business Intelligence Competency Center
SAS Administration
Cell 949-246-7713
alec.effrat at wellsfargo.com




From: gpfsug-discuss-bounces at spectrumscale.org <gpfsug-discuss-bounces at spectrumscale.org> On Behalf Of Tomasz Rachobinski
Sent: Monday, May 24, 2021 6:06 AM
To: gpfsug-discuss at spectrumscale.org
Subject: [gpfsug-discuss] Spectrum Scale - how to get RPO=0

Hello everyone,
we are trying to implement a mixed Linux/Windows environment, and one question is at the top of our list: is there any global method to avoid asynchronous I/O and write everything in synchronous mode?
And if there is no global sync setting, how can we enforce synchronous I/O from a Linux or Windows client?
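On the Linux side, the per-process approach we are aware of is opening files with O_SYNC, so that each write blocks until the data reaches stable storage. A minimal sketch using GNU dd's oflag=sync (the /tmp path is a placeholder; point it at the Scale mount instead):

```shell
# Per-process synchronous writes on Linux: oflag=sync opens the output
# file with O_SYNC, so dd does not continue until each write reaches
# stable storage. /tmp is a placeholder; use a path on the Scale mount.
printf 'hello' | dd of=/tmp/osync-demo oflag=sync status=none
cat /tmp/osync-demo
```

On the Windows side, the analogous per-handle control is opening the file via CreateFile with FILE_FLAG_WRITE_THROUGH.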

Greetings
Tom Rachobinski

