[gpfsug-discuss] InfiniBand and OmniPath NSD servers
Sean Mc Grath
smcgrat at tcd.ie
Fri Sep 6 16:52:55 BST 2024
Hi,
We have a GPFS cluster where the NSD servers mount the storage over Fibre Channel and export the file system over InfiniBand to clients.
We will be getting some used equipment that uses OmniPath.
The "IBM Storage Scale Frequently Asked Questions and Answers" states [1]:
> RDMA is not supported on a node when both Mellanox HCAs and Cornelis Networks Omni-Path HFIs are enabled for RDMA.
Does this mean that we wouldn't be able to consolidate both IB and OPA HCAs in the same NSD servers, and would instead need two types of NSD server: 1) InfiniBand-exporting and 2) OmniPath-exporting?
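If that is the case, I assume the RDMA settings would have to be applied separately to each class of server, something like the sketch below (ibNsdNodes, opaNsdNodes and the node names are just made-up placeholders on my part):

    # hypothetical node classes for the two server types
    mmcrnodeclass ibNsdNodes -N nsd-ib01,nsd-ib02
    mmcrnodeclass opaNsdNodes -N nsd-opa01,nsd-opa02

    # enable RDMA on each class over its own fabric only
    mmchconfig verbsRdma=enable,verbsPorts="mlx5_0/1" -N ibNsdNodes
    mmchconfig verbsRdma=enable,verbsPorts="hfi1_0/1" -N opaNsdNodes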
If so, is it then a matter of using the Multi-Rail over TCP "subnets=" setting in mmchconfig [2] to distinguish which NSD servers the clients should connect to?
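For example, if the InfiniBand-attached servers sat on one daemon subnet and the OmniPath-attached servers on another, I am guessing it would be something along these lines (the subnets below are invented purely for illustration):

    # hypothetical daemon networks: 10.10.1.0 on the IB side, 10.20.1.0 on the OPA side
    mmchconfig subnets="10.10.1.0 10.20.1.0"
    mmlsconfig subnets    # confirm the setting took effect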
Or am I completely misunderstanding all this?
Many thanks in advance.
Sean
[1] https://www.ibm.com/docs/en/STXKQY/pdf/gpfsclustersfaq.pdf
[2] https://www.ibm.com/docs/en/storage-scale/5.1.6?topic=configuring-multi-rail-over-tcp-mrot
---
Sean McGrath
smcgrat at tcd.ie
Senior Systems Administrator
Research IT, IT Services, Trinity College Dublin
https://www.tcd.ie/itservices/
https://www.tchpc.tcd.ie/