[gpfsug-discuss] NDS in Two Site scenario

Marc A Kaplan makaplan at us.ibm.com
Thu Jul 21 14:33:47 BST 2016


I don't know.  That said, let's be logical and cautious. 

Your network performance has to be comparable to (preferably better 
than!) that of your disk/storage system. 
Think speed, latency, bandwidth, jitter, reliability, and security.
For a production system with data you care about, that probably means a 
dedicated/private/reserved channel, likely on private or leased fiber.
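
If you want a rough first impression of a candidate inter-site link before 
going further, a quick check of latency, jitter, and throughput is cheap. 
A minimal sketch (the hostnames are placeholders; any real evaluation 
should benchmark against your actual storage workload):

    # Round-trip latency and jitter from a site-A node to a site-B node
    # (prints the rtt min/avg/max/mdev summary; mdev is a rough jitter indicator)
    ping -c 100 nodeB1.example.com | tail -1

    # Sustained TCP throughput between the sites
    # (assumes iperf3 is installed and "iperf3 -s" is running on nodeB1)
    iperf3 -c nodeB1.example.com -t 60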

Sure, you can cobble together a demo, proof-of-concept, or prototype with 
less than that, but are you going to bet your career, life, friendships, 
or data on that?

Then you have to work through and test failure and recovery scenarios...

This forum would be one place to gather at least some anecdotes from power 
users/admins who might be running GPFS clusters spread over
multiple kilometers... 

Is there a sales or marketing team selling this?  What do they recommend?

Here is an excerpt from an IBM white paper I found by googling...  Notice 
the qualifier "high quality wide area network":

"...Synchronous replication works well for many workloads by replicating 
data across storage arrays within a data center, within a campus or across 
geographical distances using high quality wide area network connections. 
When wide area network connections are not high performance or are not 
reliable, an asynchronous approach to data replication is required. GPFS 
3.5 introduces a feature called Active File Management (AFM). ..."

Of course GPFS has improved (and been renamed!) since 3.5, but 4.2 cannot 
magically compensate for a not-so-high-quality network!
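
For concreteness, the two approaches the excerpt contrasts look roughly 
like this on the command line. A sketch only, with made-up NSD, node, and 
path names; verify the exact syntax against the Spectrum Scale 
documentation for your release:

    # Synchronous replication within one stretched cluster: put each site's
    # NSDs in a different failure group, then create the file system with
    # two data and two metadata replicas.
    #
    # nsd.stanza:
    %nsd: nsd=siteA_nsd1 device=/dev/sdb servers=nodeA1,nodeA2 usage=dataAndMetadata failureGroup=1
    %nsd: nsd=siteB_nsd1 device=/dev/sdb servers=nodeB1,nodeB2 usage=dataAndMetadata failureGroup=2

    mmcrnsd -F nsd.stanza
    mmcrfs gpfs1 -F nsd.stanza -m 2 -M 2 -r 2 -R 2

    # Asynchronous alternative: an AFM single-writer fileset at the cache
    # site, pointing at an NFS export of the home site.
    mmcrfileset gpfs1 site2cache -p afmMode=single-writer \
        -p afmTarget=nfs://homehost/export/site1data --inode-space new
    mmlinkfileset gpfs1 site2cache -J /gpfs/gpfs1/site2cache

Either way, a stretched cluster also needs a quorum plan (tiebreaker disks 
or a quorum node at a third site) so that losing one site or the 
inter-site link does not take down the whole cluster.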




From:   "Mark.Bush at siriuscom.com" <Mark.Bush at siriuscom.com>
To:     gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Date:   07/20/2016 07:34 PM
Subject:        Re: [gpfsug-discuss] NDS in Two Site scenario
Sent by:        gpfsug-discuss-bounces at spectrumscale.org



Marc, are you saying that anything outside a particular data center 
shouldn’t be part of a cluster?  I’m not sure marketing is in line with 
this, then. 
 
From: <gpfsug-discuss-bounces at spectrumscale.org> on behalf of Marc A 
Kaplan <makaplan at us.ibm.com>
Reply-To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Date: Wednesday, July 20, 2016 at 4:52 PM
To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Subject: Re: [gpfsug-discuss] NDS in Two Site scenario
 
Careful! You need to plan and test, test and plan both failure scenarios 
and performance under high network loads.
I don't believe GPFS was designed with the idea of splitting clusters over 
multiple sites. 
If your inter-site network runs fast enough, and you can administer it 
well enough -- perhaps it will work well enough...

Hint: Think about what the words "cluster" and "site" mean.

GPFS does have the AFM feature, which was designed for multi-site 
deployments.





