[gpfsug-discuss] more than one mlx connectx-4 adapter in same host

Sven Oehme oehmes at gmail.com
Thu Dec 21 01:09:19 GMT 2017


I don't know if that works with Cisco, but I use 50 and 100m cables for 40
as well as 100Gbit in my lab between two Mellanox switches:
http://www.mellanox.com/products/interconnect/ethernet-active-optical-cables.php
As Paul pointed out, one of the very first things you need to do after
adding an adapter is to flash the firmware to a recent level. Especially if
you have two adapters with different FW levels I have seen even the one at
the higher level not work properly, so before you do anything else get them
both to a recent level, and especially to the same level if they are the
same adapter type.
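
For example (a minimal sketch, assuming the card sits at PCI address
04:00.0 and that you have already downloaded a firmware image whose PSID
matches the card; the image name below is just a placeholder):

  # check the current firmware level and PSID of the card
  mstflint -d 04:00.0 query

  # burn the newer image, then power-cycle the host as Paul mentioned
  mstflint -d 04:00.0 -i <fw-image.bin> burn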

sven

On Wed, Dec 20, 2017 at 10:01 PM David D Johnson <david_johnson at brown.edu>
wrote:

> We're trying to get a 40 GbE connection between Mellanox switches and Cisco
> switches down at the other end of the machine room.
> The BiDi optic seems to be the best fit given the roughly 30m run on
> multimode; however, Mellanox support says it's not supported.
> We want to use this to get close to IB speeds for GPFS on nodes that aren't
> on the IB fabric.
> Has anyone had any luck getting 40 or 100 gig at 20-30m when the
> switches are different brands?
>
> Thanks,
>  -- ddj
>
> On Dec 20, 2017, at 4:53 PM, Sanchez, Paul <Paul.Sanchez at deshaw.com>
> wrote:
>
> We have run multiple ConnectX-4 NICs in bonded MLAG (Arista) and VPC
> (Cisco) switch configurations on our NSD servers.  We used to see issues
> with firmware versions that didn’t support the optics we wanted to use
> (e.g. early CX3/CX4 and Cisco 40G-BiDi).  You may also want to check with
> mstflint whether the firmware levels match on the MLX cards, and if you
> upgrade firmware, in some cases a power-cycle (not a reboot) can be
> required to finish the process.
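>
> A quick way to compare levels across every Mellanox device in the host (a
> sketch, assuming the open-source mstflint package is installed; 15b3 is
> the Mellanox PCI vendor ID):
>
>   for dev in $(lspci -d 15b3: | awk '{print $1}'); do
>     # query each PCI function; ones that can't be queried are skipped quietly
>     echo -n "$dev  "; mstflint -d $dev query 2>/dev/null | grep -i 'FW Version'
>   done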
>
> -Paul
>
> *From:* gpfsug-discuss-bounces at spectrumscale.org *On Behalf Of* Andrew Beattie
> *Sent:* Wednesday, December 20, 2017 4:47 PM
> *To:* gpfsug-discuss at spectrumscale.org
> *Subject:* Re: [gpfsug-discuss] more than one mlx connectx-4 adapter in
> same host
>
> IBM ESS building blocks can have up to 3 dual-port Mellanox adapter cards
> (10GbE, 40Gb Ethernet, 56Gb IB, or 100Gb IB) per I/O node; because there
> are 2 I/O nodes, that is up to 12 ports per building block,
> so there should not be any reason for this to fail.
>
> I regularly see a mix of 10Gb / 40Gb or 10Gb / IB configurations.
>
>
>
> Regards
> *Andrew Beattie*
> *Software Defined Storage - IT Specialist*
> *Phone:* 614-2133-7927
> *E-mail:* abeattie at au1.ibm.com
>
>
>
> ----- Original message -----
> From: "J. Eric Wonderley" <eric.wonderley at vt.edu>
> Sent by: gpfsug-discuss-bounces at spectrumscale.org
> To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
> Cc:
> Subject: Re: [gpfsug-discuss] more than one mlx connectx-4 adapter in same
> host
> Date: Thu, Dec 21, 2017 6:37 AM
>
> Just plain TCP/IP.
>
> We have dual-port ConnectX-4s in our NSD servers.  Upon adding a second
> ConnectX-4 HBA, no links go up or show "up".  I have one port on each HBA
> configured for eth, and ibv_devinfo looks sane.
>
> I cannot find anything indicating that this should not work.  I have a
> ticket open with Mellanox.
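>
> A few host-side checks worth running before blaming the switch (a sketch;
> the interface name is a placeholder and will differ on your servers):
>
>   # map each mlx device/port to its net interface and see whether it is Up
>   ibdev2netdev
>
>   # confirm the port state and that the link layer really is Ethernet
>   ibv_devinfo | grep -E 'hca_id|state|link_layer'
>
>   # check whether the NIC sees a carrier at all
>   ethtool <interface> | grep -E 'Speed|Link detected'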
>
> On Wed, Dec 20, 2017 at 3:25 PM, Knister, Aaron S. (GSFC-606.2)[COMPUTER
> SCIENCE CORP] <aaron.s.knister at nasa.gov> wrote:
>
> We’ve done a fair amount of VPI work, but admittedly not with ConnectX-4.
> Is it possible the cards are trying to talk IB rather than Eth?  I figured
> you’re Ethernet-based because of the mention of Juniper.
>
> Are you attempting to do RoCE or just plain TCP/IP?
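>
> One quick way to rule the IB-vs-Eth question in or out is to look at the
> port protocol stored in the card's firmware configuration (a sketch,
> assuming the Mellanox MFT tools are installed; in the open-source mstflint
> package the tool is called mstconfig rather than mlxconfig, and 04:00.0 is
> a placeholder PCI address):
>
>   # LINK_TYPE_P1 / LINK_TYPE_P2: 1 = IB, 2 = ETH
>   mlxconfig -d 04:00.0 query | grep LINK_TYPE
>
>   # if a port is set to IB, force it to Ethernet and reboot
>   mlxconfig -d 04:00.0 set LINK_TYPE_P1=2 LINK_TYPE_P2=2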
>
> On December 20, 2017 at 14:40:48 EST, J. Eric Wonderley <
> eric.wonderley at vt.edu> wrote:
>
> Hello:
>
> Does anyone have this type of config?
>
> The host configuration looks sane, but we seem to observe link-down on all
> mlx adapters no matter what we do.
>
> The big picture is that we are attempting to do MC (multi-chassis) LAGs to
> a core switch.  I'm somewhat fearful as to how this is implemented in the
> Juniper switch we are about to test.
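>
> For what it's worth, the host side of an MC-LAG is just a normal 802.3ad
> (LACP) bond. A minimal sketch with iproute2, assuming placeholder interface
> names ens1f0/ens2f0 (one port from each ConnectX-4):
>
>   ip link add bond0 type bond mode 802.3ad miimon 100 lacp_rate fast
>   ip link set ens1f0 down && ip link set ens1f0 master bond0
>   ip link set ens2f0 down && ip link set ens2f0 master bond0
>   ip link set bond0 up
>
>   # once the MC-LAG is up on the Juniper side, the LACP partner details
>   # should appear here
>   cat /proc/net/bonding/bond0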
>
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>

