[gpfsug-discuss] Anybody running GPFS over iSCSI? -

Frank Kraemer kraemerf at de.ibm.com
Sun Dec 16 11:59:39 GMT 2018


Ethernet networking is changing very fast today, and the driving forces are
the "Hyperscale" datacenters. This big innovation is happening right now and
is changing the world. To follow the conversation, you need to understand
the differences between ASICs, FPGAs, and NPUs in modern Ethernet
networking.

1) Mellanox has a very good answer here based on the Spectrum-2 chip

2) Broadcom's answer to this is the 12.8 Tb/s StrataXGS Tomahawk 3 Ethernet
Switch Series

3) Barefoot's Tofino 2 is another valid answer to this problem, as it's
programmable with the P4 language (important for Hyperscale datacenters)

The P4 language itself is open source. There are details at p4.org, or you
can download the code from GitHub: https://github.com/p4lang/

4) The last newcomer to this party is Innovium, with its Teralynx silicon

(Most of the new Cisco switches are powered by Teralynx silicon, as Cisco
seems to be late to this game with its own development.)

So, back to your question: iSCSI is not the future! NVMe and its variants
are the way to go, and these new Ethernet switching products are built with
this in mind. Due to the performance demands of NVMe, high-performance,
low-latency networking is required, and Ethernet-based RDMA (RoCE, RoCEv2,
or iWARP) is the leading choice.


P.S. My Xmas wishlist to the IBM Spectrum Scale development team would be a
"2019 HighSpeed Ethernet Networking optimization for Spectrum Scale" to
make use of all these new things and options :-)

Frank Kraemer
IBM Consulting IT Specialist  / Client Technical Architect
Am Weiher 24, 65451 Kelsterbach, Germany
mailto:kraemerf at de.ibm.com
Mobile +49171-3043699
IBM Germany
