[gpfsug-discuss] Building GPFS filesystem system data pool on shared nothing NVMe drives

David Johnson david_johnson at brown.edu
Tue Jul 30 12:46:14 BST 2019


Can we confirm the requirement for disks per RG?  I have 4 RG, but only 6 x 3TB NVMe drives per box.

> On Jul 29, 2019, at 1:34 PM, Luis Bolinches <luis.bolinches at fi.ibm.com> wrote:
> 
> Hi, from phone so sorry for typos. 
> 
> I really think you should look into Spectrum Scale Erasure Code Edition (ECE) for this. 
> 
> Sure you could do a RAID on each node as you mention here but that sounds like a lot of waste to me on storage capacity. Not to forget you get other goodies like end to end checksum and rapid rebuilds with ECE, among others. 
> 
> Four servers is the minimum requirement for ECE (4+3p) and from top of my head 12 disk per RG, you are fine with both requirements. 
> 
> There is a presentation on ECE on the user group web page from London May 2019 were we talk about ECE. 
> 
> And the ibm page of the product https://www.ibm.com/support/knowledgecenter/STXKQY_ECE_5.0.3/com.ibm.spectrum.scale.ece.v5r03.doc/b1lece_intro.htm <https://www.ibm.com/support/knowledgecenter/STXKQY_ECE_5.0.3/com.ibm.spectrum.scale.ece.v5r03.doc/b1lece_intro.htm>
> --
> Cheers
> 
> El 29 jul 2019, a las 19:06, David Johnson <david_johnson at brown.edu <mailto:david_johnson at brown.edu>> escribió:
> 
>> We are planning a 5.0.x upgrade onto new hardware to make use of the new 5.x GPFS features.
>> The goal is to use up to four NSD nodes for metadata, each one with 6 NVMe drives (to be determined
>> whether we use Intel VROC for raid 5 or raid 1, or just straight disks).  
>> 
>> So questions — 
>> Has anyone done system pool on shared nothing cluster?  How did you set it up?
>> With default metadata replication set at 3, can you make use of four NSD nodes effectively?
>> How would one design the location vectors and failure groups so that the system metadata is
>> spread evenly across the four servers?
>> 
>> Thanks,
>> — ddj
>> Dave Johnson
>> _______________________________________________
>> gpfsug-discuss mailing list
>> gpfsug-discuss at spectrumscale.org <http://spectrumscale.org/>
>> http://gpfsug.org/mailman/listinfo/gpfsug-discuss <http://gpfsug.org/mailman/listinfo/gpfsug-discuss> 
>> 
> 
> Ellei edellä ole toisin mainittu: / Unless stated otherwise above:
> Oy IBM Finland Ab
> PL 265, 00101 Helsinki, Finland
> Business ID, Y-tunnus: 0195876-3 
> Registered in Finland
> 
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://gpfsug.org/pipermail/gpfsug-discuss_gpfsug.org/attachments/20190730/b61f3695/attachment-0002.htm>


More information about the gpfsug-discuss mailing list