[gpfsug-discuss] Preferred NSD

Michal Zacek zacekm at img.cas.cz
Wed Mar 14 10:57:36 GMT 2018


Hi,

I don't think GPFS is a good choice for your setup. Did you consider
GlusterFS? It's used at the Max Planck Institute in Dresden for HPC
processing of molecular biology data. They have a similar setup: tens
(hundreds) of computers with shared local storage in GlusterFS. But you
will need a 10Gb network.
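For illustration, a GlusterFS volume built from the nodes' local disks might be set up roughly as sketched below. The host names (node01..node04), brick path, and replica count are assumptions for the example, not details from the Dresden setup:

```shell
# Hypothetical sketch: distributed-replicated GlusterFS volume across
# compute nodes contributing local storage. Names/paths are made up.

# On each node, prepare a local brick directory:
mkdir -p /data/glusterfs/scratch/brick1

# From one node, form the trusted storage pool:
gluster peer probe node02
gluster peer probe node03
gluster peer probe node04

# Create a replica-2 volume over four bricks (2x2 distributed-replicated),
# then start it:
gluster volume create scratch replica 2 \
    node01:/data/glusterfs/scratch/brick1 \
    node02:/data/glusterfs/scratch/brick1 \
    node03:/data/glusterfs/scratch/brick1 \
    node04:/data/glusterfs/scratch/brick1
gluster volume start scratch

# Clients mount the volume over the 10Gb network:
mount -t glusterfs node01:/scratch /mnt/scratch
```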

Michal


On 12.3.2018 at 16:23, Lukas Hejtmanek wrote:
> On Mon, Mar 12, 2018 at 11:18:40AM -0400, valdis.kletnieks at vt.edu wrote:
>> On Mon, 12 Mar 2018 15:51:05 +0100, Lukas Hejtmanek said:
>>> I don't think 5 or more data/metadata replicas are practical here. On the
>>> other hand, multiple node failures are something we really do expect.
>> Umm.. do I want to ask *why*, out of only 60 nodes, multiple node
>> failures are an expected event - to the point that you're thinking
>> about needing 5 replicas to keep things running?
> In my experience with cluster management, we have multiple nodes down on a
> regular basis (HW failures, SW maintenance, and so on).
>
> I'm basically thinking that 2-3 replicas might not be enough, while 5 or more
> become too expensive, both in disk space and in required bandwidth, this
> being scratch space with a high I/O load expected.
>
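The disk-space side of that trade-off is easy to put in numbers. A minimal sketch, assuming a hypothetical 60 nodes with 4 TB of local disk each (figures chosen for illustration, not taken from the thread):

```python
# With r replicas, usable capacity is raw/r, and every client write is
# amplified r times on the network. Node count and per-node disk size
# below are illustrative assumptions.

def usable_tb(raw_tb: float, replicas: int) -> float:
    """Usable capacity for a given raw capacity and replica count."""
    return raw_tb / replicas

raw = 60 * 4.0  # 60 nodes x 4 TB local disk each (assumed)
for r in (2, 3, 5):
    print(f"replicas={r}: usable={usable_tb(raw, r):.0f} TB, "
          f"write amplification={r}x")
```

So going from 3 replicas to 5 cuts usable scratch space from a third to a fifth of raw capacity while adding two more full copies to every write.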



