[gpfsug-discuss] Building GPFS filesystem system data pool on shared nothing NVMe drives

David Johnson david_johnson at brown.edu
Mon Jul 29 17:04:38 BST 2019


We are planning a 5.0.x upgrade onto new hardware to make use of the new 5.x GPFS features.
The goal is to use up to four NSD nodes for metadata, each with 6 NVMe drives (still to be determined
whether we use Intel VROC for RAID 5 or RAID 1, or just straight disks).
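
For reference, here is roughly what I had in mind for the NSD stanzas on one of the
servers (hostnames, device paths, and NSD names below are only placeholders, with one
failure group per server):

    %nsd:
      device=/dev/nvme0n1
      nsd=meta_srv1_nvme0
      servers=nsd-srv1
      usage=metadataOnly
      failureGroup=1
      pool=system

    # repeated for the remaining NVMe devices on nsd-srv1, then for the other
    # three servers with failureGroup=2, 3, and 4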

So, a few questions:
Has anyone built the system pool on a shared-nothing cluster?  How did you set it up?
With the default metadata replication set to 3, can you make use of four NSD nodes effectively?
How would one design the location vectors and failure groups so that the system metadata is
spread evenly across the four servers?
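
My rough guess was one failure group per server as above, then create the filesystem with
metadata replication of 3 and let GPFS pick three of the four failure groups for each
replica set, roughly like this (the stanza file would also carry the data NSDs; the names
and options here are placeholders, not a tested configuration):

    mmcrnsd -F meta_and_data.stanza
    mmcrfs fsname -F meta_and_data.stanza -m 3 -M 3

But I am not sure whether that actually spreads the metadata evenly over all four servers,
or whether the topology-vector form of failureGroup would do a better job.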

Thanks,
 — ddj
Dave Johnson

