[gpfsug-discuss] metadata replication question

Simon Thompson (Research Computing - IT Services) S.J.Thompson at bham.ac.uk
Sun Jan 3 21:56:26 GMT 2016


I currently have 4 NSD servers in a cluster, two pairs in two data centres. Data and metadata replication is currently set to 2, with metadata sitting on SAS drives in a Storwize array. I also have a VM floating between the two data centres to guarantee quorum in one of them in the event of split brain.

I'd like to add some SSD for metadata.

Should I:

Add RAID 1 SSD to the Storwize?

Add local SSD to the NSD servers?

If I did the second, should I:
 add SSD to each NSD server (not RAID 1), set each in a different failure group, and make metadata replication 4?
 add SSD to each NSD server as RAID 1, using the same failure group for each data centre pair?
 add SSD to each NSD server (not RAID 1), using the same failure group for each data centre pair?
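For the first option, the NSD stanzas might look something like the sketch below. Everything here is hypothetical (device paths, NSD and server names, failure group numbers, and the file system name gpfs01 are all placeholders), and the usual caveat applies: -m cannot exceed the maximum metadata replicas (-M) chosen when the file system was created.

```shell
# Hypothetical stanza file: one local SSD per NSD server, metadata only,
# each in its own failure group so 4 replicas land on 4 different servers.
cat > nsd_ssd.stanza <<'EOF'
%nsd: device=/dev/nvme0n1 nsd=ssd_dc1_nsd1 servers=nsd1 usage=metadataOnly failureGroup=10
%nsd: device=/dev/nvme0n1 nsd=ssd_dc1_nsd2 servers=nsd2 usage=metadataOnly failureGroup=11
%nsd: device=/dev/nvme0n1 nsd=ssd_dc2_nsd3 servers=nsd3 usage=metadataOnly failureGroup=20
%nsd: device=/dev/nvme0n1 nsd=ssd_dc2_nsd4 servers=nsd4 usage=metadataOnly failureGroup=21
EOF

mmcrnsd -F nsd_ssd.stanza        # create the NSDs
mmadddisk gpfs01 -F nsd_ssd.stanza   # add them to the file system
mmchfs gpfs01 -m 4               # raise default metadata replicas to 4
```

For the paired options, the two SSDs in each data centre would instead share a failure group (e.g. 10 and 20), and -m would stay at 2.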

Or something else entirely?

What I want to survive is a split data centre situation, or the failure of a single NSD server at any point...

Thoughts? Comments?

I'm thinking the first of the NSD-local options uses 4 writes, as does the second, but with the first each NSD server also holds a local copy of the metadata, so if an SSD fails it should be able to get it from its local partner anyway (with readReplicaPolicy=local)?
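If reading from the nearest replica is the goal, the relevant cluster setting is readReplicaPolicy; a minimal sketch (check the mmchconfig documentation for the values supported in your release):

```shell
# Prefer the replica on a locally attached or local-failure-group disk
# when reading, rather than round-robining across all replicas.
mmchconfig readReplicaPolicy=local
mmlsconfig readReplicaPolicy     # verify the setting took effect
```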

I'd like a cost-competitive solution that gives faster performance than the current SAS drives.

Was also thinking I might add an SSD to each NSD server for a system.log pool for HAWC as well...
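The HAWC side would be a similar stanza exercise: disks assigned to the system.log pool, then a write-cache threshold set on the file system. A sketch only, with placeholder names (systemlog.stanza, gpfs01) and a threshold value that is an assumption, not a recommendation:

```shell
# Hypothetical: add per-server SSDs whose stanzas specify pool=system.log,
# then enable HAWC by setting a non-zero write cache threshold.
mmadddisk gpfs01 -F systemlog.stanza
mmchfs gpfs01 --write-cache-threshold 64K
```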

Thanks

Simon

