[gpfsug-discuss] Joining RDMA over different networks?

Kidger, Daniel daniel.kidger at hpe.com
Mon Aug 21 19:43:03 BST 2023


Ryan,

This sounds very interesting.
Do you have more details or references on how they were connected together, and what the pain points were, if any?

Daniel


From: gpfsug-discuss <gpfsug-discuss-bounces at gpfsug.org> On Behalf Of Ryan Novosielski
Sent: 21 August 2023 19:07
To: gpfsug main discussion list <gpfsug-discuss at gpfsug.org>
Cc: gpfsug-discuss at spectrumscale.org
Subject: Re: [gpfsug-discuss] Joining RDMA over different networks?

If I understand what you’re asking correctly, we used to have a cluster that did this. GPFS was on InfiniBand, some of the compute nodes were too, and the rest were on Omni-Path. There were routers in between with interfaces on both types of network.
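
For reference, a minimal sketch of how the two fabrics can be kept apart on the Scale side, assuming the verbsPorts Device/Port/Fabric syntax and purely illustrative device names and node classes (RDMA is then used only between nodes that share a fabric number, and cross-fabric traffic falls back to TCP/IP over the routers):

    # Enable verbs RDMA on both sets of nodes (node classes are illustrative)
    mmchconfig verbsRdma=enable -N ibNodes,opaNodes

    # InfiniBand nodes: Mellanox HCA, port 1, fabric number 1
    mmchconfig verbsPorts="mlx5_0/1/1" -N ibNodes

    # Omni-Path nodes: hfi1 HCA, port 1, fabric number 2
    mmchconfig verbsPorts="hfi1_0/1/2" -N opaNodes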
Sent from my iPhone


On Aug 21, 2023, at 13:55, Kidger, Daniel <daniel.kidger at hpe.com> wrote:


I know that in the Lustre world, LNET routers are used to provide RDMA over heterogeneous networks.

Is there an equivalent for Storage Scale?
e.g. if an ESS uses InfiniBand to connect directly to Cluster A, could that InfiniBand RDMA fabric be “routed” to Cluster B, which has RoCE connecting all of its nodes together, so that the filesystem can be mounted?

P.S. The same question would apply to other usually incompatible RDMA networks such as Omni-Path, Slingshot, Cornelis, …?
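
(For comparison, an LNET router setup on the Lustre side typically looks something like the sketch below; network names and NIDs are illustrative only.)

    # On clients in the tcp0 network, add a route to the o2ib0
    # (InfiniBand) network via a router node with a leg on both
    lnetctl route add --net o2ib0 --gateway 192.168.1.1@tcp0

    # On the router node itself, enable LNET forwarding between the nets
    lnetctl set routing 1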

Daniel

Daniel Kidger
HPC Storage Solutions Architect, EMEA
daniel.kidger at hpe.com

+44 (0)7818 522266

hpe.com


_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at gpfsug.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org