[gpfsug-discuss] Small cluster

Sanchez, Paul Paul.Sanchez at deshaw.com
Fri Mar 4 16:54:39 GMT 2016


You wouldn’t be alone in trying to make the “concurrent CES gateway + NSD server nodes” formula work.  That doesn’t mean it will be well-supported initially, but it does mean that others will be finding bugs and interaction issues along with you.

On GPFS 4.1.1.2, for example, it’s possible to get a CES protocol node into a state where the mmcesmonitor is dead and an mmshutdown/mmstartup is required to recover.  Since in a shared-nothing disk topology that would also require mmchdisk/mmrestripefs to recover and rebalance, it would be operationally intensive to run CES on an NSD server with local disks.  With shared SAN disks this becomes more tractable, in my opinion.
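
For reference, a rough sketch of what that recovery tends to look like at the command line (the filesystem and node names here are made up for illustration):

    # Bounce GPFS on just the affected protocol node to restart mmcesmonitor
    mmshutdown -N cesnode01
    mmstartup -N cesnode01

    # On a shared-nothing NSD server the shutdown also takes its local disks
    # down, so afterwards the disks have to be started and data re-protected:
    mmchdisk fs0 start -a     # bring the stopped disks back online
    mmrestripefs fs0 -r       # restore replication / rebalance (can run a long time)
    mmlsdisk fs0 -e           # confirm no disks are still down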

Thx
Paul

From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Zachary Giles
Sent: Friday, March 04, 2016 11:37 AM
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] Small cluster

SMB too, eh? See, this is where it starts to get hard to scale down. You could do a 3-node GPFS cluster with replication at remote sites, pulling in from AFM over the net. If you want SMB too, you're probably going to need another pair of servers to act as the Protocol Servers on top of the 3 GPFS servers. I think running them all together is not recommended, and I'd probably agree with that.
Though, you could do it anyway. If it's read-only and updated daily, eh, who cares. Again, it depends on your GPFS experience and the balance between production, price, and performance :)

On Fri, Mar 4, 2016 at 11:30 AM, Mark.Bush at siriuscom.com wrote:
Yes.  Really the only other option we have (and not a bad one) is getting a V7000 Unified in there (if we can get the price down far enough), since all they really want at the remote site is SMB shares.  I just keep thinking a set of servers would do the trick and be cheaper.



From: Zachary Giles <zgiles at gmail.com>
Reply-To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Date: Friday, March 4, 2016 at 10:26 AM

To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Subject: Re: [gpfsug-discuss] Small cluster

You can do FPO for non-Hadoop workloads. It just changes how data is placed on the disks below the GPFS filesystem layer and looks like a normal GPFS system (mostly).  I do think there were some restrictions on non-FPO nodes mounting FPO filesystems via multi-cluster... not sure if those are still there... any input on that from IBM?
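
For anyone who hasn't seen it, FPO is switched on per storage pool in the stanza file; here's a rough, illustrative sketch (the pool name and attribute values are just examples, not a recommendation):

    # Hypothetical %pool stanza used with mmcrfs/mmadddisk to enable FPO placement
    %pool:
      pool=datapool
      blockSize=1M
      layoutMap=cluster          # FPO-style chunk placement
      allowWriteAffinity=yes     # prefer writing data on the node that owns the disk
      writeAffinityDepth=1
      blockGroupFactor=128       # group consecutive blocks into larger chunks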

If the data set is small enough, it might just be wise to use internal storage with 3-way replication. A 36TB 2U server is ~$10K (just throwing out common numbers); three of those per site would fit in your budget.
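
As a rough sketch of that plain 3x-replication route (server names, devices, and mount point are made up; not a tested recipe), you'd give each server its own failure group so every replica lands on a different node:

    # nsd.stanza -- one internal disk per server shown for brevity
    %nsd: device=/dev/sdb nsd=srv1_sdb servers=srv1 usage=dataAndMetadata failureGroup=1
    %nsd: device=/dev/sdb nsd=srv2_sdb servers=srv2 usage=dataAndMetadata failureGroup=2
    %nsd: device=/dev/sdb nsd=srv3_sdb servers=srv3 usage=dataAndMetadata failureGroup=3

    mmcrnsd -F nsd.stanza
    # default and maximum of 3 copies for both data and metadata
    mmcrfs fs0 -F nsd.stanza -m 3 -M 3 -r 3 -R 3 -T /gpfs/fs0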

Again, it depends on your requirements, the stability balance between 'science experiment' and production, GPFS knowledge level, etc...

This is actually an interesting and somewhat underserved space for small enterprises. If you just want 10-20TB active-active online everywhere, say for VMware, or NFS, or something else, there aren't all that many good solutions today that scale down far enough at a decent price. It's easy with many, many PB, but small... I don't know. I think the above sounds as good as anything without going SAN-crazy.



On Fri, Mar 4, 2016 at 11:21 AM, Mark.Bush at siriuscom.com wrote:
I guess this is really my question.  Budget is less than $50k per site and they need around 20TB of storage.  Two nodes with an MD3 or something may work.  But could it work (and be successful) with just servers and internal drives?  Should I do FPO for non-Hadoop-like workloads?  I didn’t think I could get native RAID except on the ESS (GSS no longer exists, if I remember correctly).  Do I just make replicas and call it good?


Mark

From: <gpfsug-discuss-bounces at spectrumscale.org> on behalf of Marc A Kaplan <makaplan at us.ibm.com>
Reply-To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Date: Friday, March 4, 2016 at 10:09 AM
To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Subject: Re: [gpfsug-discuss] Small cluster

Jon, I don't doubt your experience, but it's not quite fair or even sensible to make a decision today based on what was available in the GPFS 2.3 era.

We are now at GPFS 4.2, with support for 3-way replication and FPO.
We also have RAID controllers, IB, "Native RAID", ESS and GSS solutions, and more.

So there are more choices and more options, which makes finding an "optimal" solution more difficult.

To begin with, as with any provisioning problem, one should try to state requirements, goals, budgets, constraints, failure/tolerance models and assumptions, expected workloads, desired performance, and so on.






--
Zach Giles
zgiles at gmail.com




--
Zach Giles
zgiles at gmail.com

