[gpfsug-discuss] more than one mlx connectx-4 adapter in same host

Simon Thompson (IT Research Support) S.J.Thompson at bham.ac.uk
Wed Dec 20 20:45:37 GMT 2017


I can't remember if this was on mlx4 or mlx5 driver cards, but we found we had to use LINKDELAY=20 when using bonding for Ethernet.
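For reference, that looked roughly like this in the slave interface's ifcfg file (a minimal sketch assuming RHEL-style network-scripts; device and bond names are just examples):

    # /etc/sysconfig/network-scripts/ifcfg-ens2f0  (slave of bond0)
    DEVICE=ens2f0
    TYPE=Ethernet
    ONBOOT=yes
    MASTER=bond0
    SLAVE=yes
    # Wait up to 20 seconds for carrier before ifup gives up on the port
    LINKDELAY=20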

Simon
________________________________________
From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of J. Eric Wonderley [eric.wonderley at vt.edu]
Sent: 20 December 2017 20:37
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] more than one mlx connectx-4 adapter in same host

Just plain TCP/IP.

We have dual-port ConnectX-4s in our NSD servers.  Upon adding a second ConnectX-4 HBA, no links come up or show "up".  I have one port on each HBA configured for Ethernet, and ibv_devinfo looks sane.

I cannot find anything indicating that this should not work.  I have a ticket open with Mellanox.
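For reference, a quick way to double-check what the ports and the kernel think of the links (a sketch; interface names below are just examples):

    # Confirm each ConnectX-4 port is in Ethernet mode and check its state
    ibv_devinfo | egrep 'hca_id|port:|state|link_layer'
    # Expect "link_layer: Ethernet" and "state: PORT_ACTIVE" on cabled ports

    # Kernel-side view of carrier/operational state
    ip -br link show ens2f0 ens2f1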

On Wed, Dec 20, 2017 at 3:25 PM, Knister, Aaron S. (GSFC-606.2)[COMPUTER SCIENCE CORP] <aaron.s.knister at nasa.gov> wrote:


We’ve done a fair amount of VPI work, but admittedly not with ConnectX-4. Is it possible the cards are trying to talk IB rather than Eth? I figured you’re Ethernet-based because of the mention of Juniper.
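If they are, the port protocol can be forced to Ethernet with mlxconfig, roughly like this (a sketch; the /dev/mst path is just an example, ConnectX-4 typically shows up as mt4115):

    mst start
    mlxconfig -d /dev/mst/mt4115_pciconf0 query | grep LINK_TYPE
    # 1 = IB, 2 = ETH (ConnectX-4 is VPI, so it can come up as either)
    mlxconfig -d /dev/mst/mt4115_pciconf0 set LINK_TYPE_P1=2 LINK_TYPE_P2=2
    # Reboot (or reset the adapter) for the new port protocol to take effect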

Are you attempting to do RoCE or just plain TCP/IP?


On December 20, 2017 at 14:40:48 EST, J. Eric Wonderley <eric.wonderley at vt.edu> wrote:
Hello:

Does anyone have this type of config?

The host configuration looks sane, but we observe link-down on all Mellanox adapters no matter what we do.

The big picture is that we are attempting MC-LAGs (multi-chassis link aggregation) to a core switch.  I'm somewhat apprehensive about how this is implemented in the Juniper switch we are about to test.
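For context, the host-side bonding we have in mind is roughly this (a sketch; addresses, names, and options are examples, not our final config):

    # /etc/sysconfig/network-scripts/ifcfg-bond0
    DEVICE=bond0
    TYPE=Bond
    BONDING_MASTER=yes
    ONBOOT=yes
    BOOTPROTO=none
    IPADDR=10.0.0.10
    PREFIX=24
    # 802.3ad (LACP) is what the MC-LAG on the switch pair expects;
    # layer3+4 hashing spreads NSD traffic across both links
    BONDING_OPTS="mode=802.3ad miimon=100 lacp_rate=fast xmit_hash_policy=layer3+4"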
