<html><body><p>The V7000 Unified type of system is made for this application. <br><br><a href="http://www-03.ibm.com/systems/storage/disk/storwize_v7000/">http://www-03.ibm.com/systems/storage/disk/storwize_v7000/</a><br><br><br>Jeff Ceason<br>Solutions Architect<br>(520) 268-2193 (Mobile)<br>ceason@us.ibm.com<br><br><font size="2" color="#5F5F5F">From: </font><font size="2">gpfsug-discuss-request@spectrumscale.org</font><br><font size="2" color="#5F5F5F">To: </font><font size="2">gpfsug-discuss@spectrumscale.org</font><br><font size="2" color="#5F5F5F">Date: </font><font size="2">03/04/2016 11:15 AM</font><br><font size="2" color="#5F5F5F">Subject: </font><font size="2">gpfsug-discuss Digest, Vol 50, Issue 14</font><br><font size="2" color="#5F5F5F">Sent by: </font><font size="2">gpfsug-discuss-bounces@spectrumscale.org</font><br><hr width="100%" size="2" align="left" noshade style="color:#8091A5; "><br><br><br><tt>Send gpfsug-discuss mailing list submissions to<br> gpfsug-discuss@spectrumscale.org<br><br>To subscribe or unsubscribe via the World Wide Web, visit<br> </tt><tt><a href="http://gpfsug.org/mailman/listinfo/gpfsug-discuss">http://gpfsug.org/mailman/listinfo/gpfsug-discuss</a></tt><tt><br>or, via email, send a message with subject or body 'help' to<br> gpfsug-discuss-request@spectrumscale.org<br><br>You can reach the person managing the list at<br> gpfsug-discuss-owner@spectrumscale.org<br><br>When replying, please edit your Subject line so it is more specific<br>than "Re: Contents of gpfsug-discuss digest..."<br><br><br>Today's Topics:<br><br> 1. 
Re: Small cluster (Sven Oehme)<br><br><br>----------------------------------------------------------------------<br><br>Message: 1<br>Date: Fri, 4 Mar 2016 19:03:16 +0100<br>From: "Sven Oehme" <oehmes@us.ibm.com><br>To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org><br>Subject: Re: [gpfsug-discuss] Small cluster<br>Message-ID: <201603041804.u24I4g2R026689@d03av01.boulder.ibm.com><br>Content-Type: text/plain; charset="utf-8"<br><br>Hi,<br><br>a couple of comments on the various points in this thread.<br><br>1. the need to run CES on separate nodes is a recommendation, not a<br>requirement. the recommendation comes from the fact that if you have<br>heavily loaded NAS traffic that brings the system to its knees, you can take<br>your NSD service down with it if it's on the same box. so as long as you<br>have reasonable performance expectations and size the system correctly,<br>there is no issue.<br><br>2. shared vs FPO vs shared nothing (just replication). the main issue<br>people overlook in this scenario is the absence of read/write caches in FPO<br>or shared-nothing configurations. every physical disk drive can only do<br>~100 IOPS, and that's independent of whether the I/O size is 1 byte or 1<br>megabyte; it's pretty much the same effort. particularly on metadata this<br>bites you really badly, as every one of these tiny I/Os eats one of the<br>~100 IOPS a disk can do, and you quickly use up all the IOPS on the drives.<br>if you have any form of RAID controller (SW or HW), it typically implements<br>at minimum a read cache, and on most systems a read/write cache, which will<br>significantly increase the number of logical I/Os one can do against a<br>disk. my best example is always: if you have a workload that does 4k seq<br>DIO writes to a single disk, with no RAID controller you can do ~400 KB/sec<br>in this workload; if you have a reasonably good write cache in front of the<br>disk you can do 50 times<br>that much. 
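the arithmetic behind point 2 can be sketched in a few lines (a hedged back-of-the-envelope: the ~100 IOPS/disk and 50x figures are from the message above; the 256 KiB coalescing size is an assumed illustration, not a measured number):

```python
# Back-of-the-envelope for the IOPS point above: a spinning disk does roughly
# the same number of I/Os per second regardless of I/O size, so throughput
# with tiny writes is IOPS-bound, not bandwidth-bound.

DISK_IOPS = 100        # rough small-I/O ceiling for one spinning disk
IO_KIB = 4             # 4 KiB sequential direct-I/O writes

# Without any controller cache, every 4 KiB write costs one disk I/O:
raw_kib_per_sec = DISK_IOPS * IO_KIB
print(raw_kib_per_sec)                  # 400 KiB/s -- the "~400 KB/sec" above

# A write cache can coalesce many small writes into one large disk I/O.
# Assume (hypothetically) it merges 64 x 4 KiB writes into one 256 KiB I/O:
COALESCE_KIB = 256
cached_kib_per_sec = DISK_IOPS * COALESCE_KIB
print(cached_kib_per_sec // raw_kib_per_sec)   # 64x -- same ballpark as "50x"
```

the same logic is why metadata hurts so much with no cache: every tiny metadata update burns a whole disk I/O on its own.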
so especially if you use snapshots, CES services, or anything<br>that's metadata-intensive, you want some type of RAID protection with<br>caching. btw, replication in the FS makes this even worse, as now each write<br>turns into 3 IOPS for the data + additional IOPS for the log records, so you<br>eat up your IOPS very quickly.<br><br>3. instead of a shared SAN, a shared SAS device is significantly cheaper, but<br>only scales to 2-4 nodes; the benefit is you only need 2 instead of 3<br>nodes, as you can use the disks as tiebreaker disks. if you also add some<br>SSDs for the metadata and make use of HAWC and LROC, you might get away<br>with not needing a RAID controller with cache, as HAWC will solve that issue<br>for you.<br><br>just a few thoughts :-D<br><br>sven<br><br><br>------------------------------------------<br>Sven Oehme<br>Scalable Storage Research<br>email: oehmes@us.ibm.com<br>Phone: +1 (408) 824-8904<br>IBM Almaden Research Lab<br>------------------------------------------<br><br><br><br>From: Zachary Giles <zgiles@gmail.com><br>To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org><br>Date: 03/04/2016 05:36 PM<br>Subject: Re: [gpfsug-discuss] Small cluster<br>Sent by: gpfsug-discuss-bounces@spectrumscale.org<br><br><br><br>SMB too, eh? See, this is where it starts to get hard to scale down. You<br>could do a 3-node GPFS cluster with replication at remote sites, pulling in<br>from AFM over the net. If you want SMB too, you're probably going to need<br>another pair of servers to act as the Protocol Servers on top of the 3 GPFS<br>servers. I think running them all together is not recommended, and I'd<br>probably agree with that.<br>Though, you could do it anyway. If it's read-only and updated daily,<br>eh, who cares. Again, depends on your GPFS experience and the balance<br>between production, price, and performance :)<br><br>On Fri, Mar 4, 2016 at 11:30 AM, Mark.Bush@siriuscom.com <<br>Mark.Bush@siriuscom.com> wrote:<br> Yes. 
Really the only other option we have (and not a bad one) is getting<br> a v7000 Unified in there (if we can get the price down far enough).<br> That's not a bad option, since all they really want is SMB shares at the<br> remote site. I just keep thinking a set of servers would do the trick and be<br> cheaper.<br><br><br><br> From: Zachary Giles <zgiles@gmail.com><br> Reply-To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org><br> Date: Friday, March 4, 2016 at 10:26 AM<br><br> To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org><br> Subject: Re: [gpfsug-discuss] Small cluster<br><br> You can do FPO for non-Hadoop workloads. It just alters the disks below<br> the GPFS filesystem layer and looks like a normal GPFS system (mostly).<br> I do think there were some restrictions on non-FPO nodes mounting FPO<br> filesystems via multi-cluster.. not sure if those are still there.. any<br> input on that from IBM?<br><br> If the data is small enough, and with 3-way replication, it might just be wise<br> to do internal storage and 3x rep. A 36TB 2U server is ~$10K (just<br> throwing out common numbers); 3 of those per site would fit in your budget.<br><br> Again.. depending on your requirements, the stability balance between<br> 'science experiment' vs production, GPFS knowledge level, etc etc...<br><br> This is actually an interesting and somewhat missing space for small<br> enterprises. If you just want 10-20TB active-active online everywhere,<br> say, for VMware, or NFS, or something else, there aren't all that many<br> good solutions today that scale down far enough and are at a decent price.<br> It's easy with many many PB, but small.. idk. I think the above sounds<br> as good as anything without going SAN-crazy.<br><br><br><br> On Fri, Mar 4, 2016 at 11:21 AM, Mark.Bush@siriuscom.com <<br> Mark.Bush@siriuscom.com> wrote:<br> I guess this is really my question. Budget is less than $50k per site<br> and they need around 20TB storage. 
Two nodes with MD3 or something may<br> work. But could it work (and be successful) with just servers and<br> internal drives? Should I do FPO for non-Hadoop-like workloads? I<br> didn't think I could get native RAID except in the ESS (GSS no longer<br> exists, if I remember correctly). Do I just make replicas and call it<br> good?<br><br><br> Mark<br><br> From: <gpfsug-discuss-bounces@spectrumscale.org> on behalf of Marc A<br> Kaplan <makaplan@us.ibm.com><br> Reply-To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org><br> Date: Friday, March 4, 2016 at 10:09 AM<br> To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org><br> Subject: Re: [gpfsug-discuss] Small cluster<br><br> Jon, I don't doubt your experience, but it's not quite fair or even<br> sensible to make a decision today based on what was available in the<br> GPFS 2.3 era.<br><br> We are now at GPFS 4.2 with support for 3-way replication and FPO.<br> Also we have RAID controllers, IB, "Native Raid", and ESS, GSS<br> solutions and more.<br><br> So more choices, more options, making finding an "optimal" solution more<br> difficult.<br><br> To begin with, as with any provisioning problem, one should try to<br> state: requirements, goals, budgets, constraints, failure/tolerance<br> models/assumptions,<br> expected workloads, desired performance, etc, etc.<br><br><br> _______________________________________________<br> gpfsug-discuss mailing list<br> gpfsug-discuss at spectrumscale.org<br> </tt><tt><a href="http://gpfsug.org/mailman/listinfo/gpfsug-discuss">http://gpfsug.org/mailman/listinfo/gpfsug-discuss</a></tt><tt><br><br><br><br><br> --<br> Zach Giles<br> zgiles@gmail.com<br><br><br>------------------------------<br><br>End of gpfsug-discuss Digest, Vol 50, Issue 14<br>**********************************************<br><br></tt><br><br><BR>
</body></html>