[gpfsug-discuss] more than one mlx connectx-4 adapter in same host

Frank Kraemer kraemerf at de.ibm.com
Thu Dec 21 07:07:24 GMT 2017


David,

> We're trying to get a 40 GbE connection between Mellanox switches and Cisco
> switches down at the other end of the machine room.
> The BiDi part seems to be the best option given the roughly 30 m run on multimode.
> However, Mellanox support says it's not supported.
> We want to use this to get close to IB speeds for GPFS on nodes that aren't
> on the IB fabric.
> Has anyone had any luck getting 40 or 100 gig at 20-30 m when the
> switches are different brands?

Maybe that's a good reason to get in contact with the team at Interoptic.
They claim solid expertise with this kind of problem, and the feedback on them is good.
http://packetpushers.net/podcast/podcasts/show-360-all-about-optics-interoptic-sponsored/
https://interoptic.com/

Frank Kraemer
IBM Consulting IT Specialist  / Client Technical Architect
Am Weiher 24, 65451 Kelsterbach
mailto:kraemerf at de.ibm.com
voice: +49-(0)171-3043699 / +4970342741078
IBM Germany