<font size=2 face="sans-serif">Thanks for spelling out the situation more
clearly. This is beyond my knowledge and expertise.</font><br><font size=2 face="sans-serif">But perhaps some other participants
on this forum will chime in!</font><br><br><font size=2 face="sans-serif">I may be missing something, but asking
"What is Lustre LNET?" via google does not yield good answers.</font><br><font size=2 face="sans-serif">It would be helpful to have some graphics
(pictures!) of typical, useful configurations. Limiting myself to
a few minutes of searching, I couldn't find any.</font><br><br><font size=2 face="sans-serif">I "get" that Lustre users/admin
with lots of nodes and several switching fabrics find it useful, but beyond
that...</font><br><br><font size=2 face="sans-serif">I guess the answer will be "Performance!"
-- but the obvious question is: Why not "just" use IP - that
is the Internetworking Protocol!</font><br><font size=2 face="sans-serif">So rather than sweat over LNET, why
not improve IP to work better over several IBs?</font><br><br><font size=2 face="sans-serif">From a user/customer point of view where
"I needed this yesterday", short of having an "LNET for
GPFS", I suggest considering reconfiguring your nodes, switches, storage
</font><br><font size=2 face="sans-serif">to get better performance. If
you need to buy some more hardware, so be it.<br></font><br><font size=2 face="sans-serif">--marc</font><br><br><br><br><font size=1 color=#5f5f5f face="sans-serif">From:
</font><font size=1 face="sans-serif">Aaron Knister <aaron.s.knister@nasa.gov></font><br><font size=1 color=#5f5f5f face="sans-serif">To:
</font><font size=1 face="sans-serif"><gpfsug-discuss@spectrumscale.org></font><br><font size=1 color=#5f5f5f face="sans-serif">Date:
</font><font size=1 face="sans-serif">09/20/2016 09:23 AM</font><br><font size=1 color=#5f5f5f face="sans-serif">Subject:
</font><font size=1 face="sans-serif">Re: [gpfsug-discuss]
GPFS Routers</font><br><font size=1 color=#5f5f5f face="sans-serif">Sent by:
</font><font size=1 face="sans-serif">gpfsug-discuss-bounces@spectrumscale.org</font><br><hr noshade><br><br><br><tt><font size=2>Hi Marc,<br><br>Currently we serve three disparate infiniband fabrics with three <br>separate sets of NSD servers all connected via FC to backend storage.<br><br>I was exploring the idea of flipping that on its head and having one set
<br>of NSD servers but would like something akin to Lustre LNET routers to
<br>connect each fabric to the back-end NSD servers over IB. I know there's
<br>IB routers out there now but I'm quite drawn to the idea of a GPFS <br>equivalent of Lustre LNET routers, having used them in the past.<br><br>I suppose I could always smush some extra HCAs in the NSD servers and do
<br>it that way but that got really ugly when I started factoring in <br>omnipath. Something like an LNET router would also be useful for GNR <br>users who would like to present to both an IB and an OmniPath fabric <br>over RDMA.<br><br>-Aaron<br><br>On 9/12/16 10:48 AM, Marc A Kaplan wrote:<br>> Perhaps if you clearly describe what equipment and connections you
have<br>> in place and what you're trying to accomplish, someone on this board
can<br>> propose a solution.<br>><br>> In principle, it's always possible to insert proxies/routers to "fake"<br>> any two endpoints into "believing" they are communicating
directly.<br>><br>><br>><br>><br>><br>> From: Aaron Knister <aaron.s.knister@nasa.gov><br>> To: <gpfsug-discuss@spectrumscale.org><br>> Date: 09/11/2016 08:01 PM<br>> Subject: Re: [gpfsug-discuss] GPFS Routers<br>> Sent by: gpfsug-discuss-bounces@spectrumscale.org<br>> ------------------------------------------------------------------------<br>><br>><br>><br>> After some googling around, I wonder if perhaps what I'm thinking
of was<br>> an I/O forwarding layer that I understood was being developed for
x86_64<br>> type machines rather than some type of GPFS protocol router or proxy.<br>><br>> -Aaron<br>><br>> On 9/11/16 5:02 PM, Knister, Aaron S. (GSFC-606.2)[COMPUTER SCIENCE<br>> CORP] wrote:<br>>> Hi Everyone,<br>>><br>>> A while back I seem to recall hearing about a mechanism being
developed<br>>> that would function similarly to Lustre's LNET routers and effectively<br>>> allow a single set of NSD servers to talk to multiple RDMA fabrics<br>>> without requiring the NSD servers to have infiniband interfaces
on each<br>>> RDMA fabric. Rather, one would have a set of GPFS gateway nodes
on each<br>>> fabric that would in effect proxy the RDMA requests to the NSD
server.<br>>> Does anyone know what I'm talking about? Just curious if it's
still on<br>>> the roadmap.<br>>><br>>> -Aaron<br>>><br>>><br>>> _______________________________________________<br>>> gpfsug-discuss mailing list<br>>> gpfsug-discuss at spectrumscale.org<br>>> </font></tt><a href="http://gpfsug.org/mailman/listinfo/gpfsug-discuss"><tt><font size=2>http://gpfsug.org/mailman/listinfo/gpfsug-discuss</font></tt></a><tt><font size=2><br>>><br>><br>> --<br>> Aaron Knister<br>> NASA Center for Climate Simulation (Code 606.2)<br>> Goddard Space Flight Center<br>> (301) 286-2776<br>> _______________________________________________<br>> gpfsug-discuss mailing list<br>> gpfsug-discuss at spectrumscale.org<br>> </font></tt><a href="http://gpfsug.org/mailman/listinfo/gpfsug-discuss"><tt><font size=2>http://gpfsug.org/mailman/listinfo/gpfsug-discuss</font></tt></a><tt><font size=2><br>><br>><br>><br>><br>><br>> _______________________________________________<br>> gpfsug-discuss mailing list<br>> gpfsug-discuss at spectrumscale.org<br>> </font></tt><a href="http://gpfsug.org/mailman/listinfo/gpfsug-discuss"><tt><font size=2>http://gpfsug.org/mailman/listinfo/gpfsug-discuss</font></tt></a><tt><font size=2><br>><br><br>-- <br>Aaron Knister<br>NASA Center for Climate Simulation (Code 606.2)<br>Goddard Space Flight Center<br>(301) 286-2776<br>_______________________________________________<br>gpfsug-discuss mailing list<br>gpfsug-discuss at spectrumscale.org<br></font></tt><a href="http://gpfsug.org/mailman/listinfo/gpfsug-discuss"><tt><font size=2>http://gpfsug.org/mailman/listinfo/gpfsug-discuss</font></tt></a><tt><font size=2><br><br></font></tt><br><BR>