[gpfsug-discuss] WAS: alternative path; Now: RDMA

Alec anacreo at gmail.com
Sun Dec 12 02:19:02 GMT 2021


I feel the need to respond here... I see many responses on this user group
forum that are dismissive of fringe or extreme use cases, with a "what do
you need that for?" mindset. The thing is, Spectrum Scale is built for the
extreme; just take the word "Parallel" in the old moniker, which was itself
an extreme use case at the time.

If you have a standard workload, then sure, most of the complex features of
the file system are toys; but many of us DO have extreme workloads where
squeezing out every ounce of performance is a worthwhile and financially
sound endeavor. It is also because of the efforts of those of us living on
the cusp of technology that these capabilities become mainstream and no
longer extreme.

I have an AIX LPAR that traverses more than 300TB of data a day on a
Spectrum Scale file system; it is fully virtualized and handles a million
files. If that performance level drops, regulatory reports will be late and
business decisions won't be current. The systems of today and the future
have to traverse this much data, and if they are slow they can't keep up
with real-time data feeds. So the difference between RDMA disk I/O and
non-RDMA disk I/O can determine what level of analytics is feasible for
real-time fraud prevention, and at what cost; today many systems achieve
this by keeping everything in memory on HUGE farms. Being able to perform
data operations at 30GB/s means you could traverse ALL of the US Census
Bureau's data for all time in about 2 seconds... that's a pretty
substantial capability that moves the bar forward in what we can do from a
technology perspective.
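
The arithmetic behind that claim is easy to sanity-check in a couple of
lines of Python (the dataset sizes below are illustrative, chosen to
bracket what a few-second traversal at that rate implies, not official
figures):

  # Rough full-scan times at a sustained 30 GB/s data rate
  rate_gb_s = 30.0
  for size_gb in (60, 1_000, 10_000):   # ~60 GB, 1 TB, 10 TB (illustrative)
      print(f"{size_gb:>6} GB -> {size_gb / rate_gb_s:7.1f} s")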

I just did a technology garage with IBM where we were able to achieve
1.5TB/writes on an encrypted ESS from a single VMware host and 4 VMs over
IP... That's over 2PB of data written per day from a single VM server.
Being able to demonstrate that there are production virtualized
environments capable of this kind of throughput helps to show where
engineering a proper storage architecture outweighs the benefit of just
throwing more GPU compute farms at the problem while disk I/O dithers. It
also helps to demonstrate how a storage-optimized virtual farm could be
leveraged to host many in-memory or data-analytics-heavy workloads in a
shared configuration.
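
For anyone checking the daily figure, the conversion from a sustained rate
to volume per day is just seconds-per-day arithmetic; a quick sketch (the
25 GB/s is an illustrative round number, not the exact garage result):

  # Sustained write rate -> data written per day
  rate_gb_s = 25.0                       # illustrative sustained rate, GB/s
  pb_per_day = rate_gb_s * 86_400 / 1e6  # 86,400 seconds in a day
  print(f"{rate_gb_s} GB/s sustained ~= {pb_per_day:.2f} PB/day")
  # Inverting it: >2 PB/day implies a sustained rate of roughly 23 GB/s or more.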

Douglas's response is the right one: how much I/O does the application /
environment actually need? It's nice to see Spectrum Scale have the
flexibility to deliver it. I'm pretty confident that if I can't deliver the
required I/O performance on Spectrum Scale, nobody else can on any other
storage platform within reasonable limits.
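
For anyone doing that sizing exercise, the starting point is just daily
volume divided by the processing window, padded for peaks; a minimal sketch
(the 8-hour window and 2x headroom are assumptions for illustration, not
numbers from my environment):

  def required_gb_s(volume_tb_per_day, window_hours, headroom=2.0):
      """Average throughput needed to move a daily volume within a
      processing window, padded by a peak-to-average headroom factor."""
      return volume_tb_per_day * 1_000 / (window_hours * 3_600) * headroom

  # 300 TB/day squeezed into an 8-hour window, with 2x headroom:
  print(f"{required_gb_s(300, 8):.1f} GB/s")   # ~20.8 GB/s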

Alec Effrat

On Thu, Dec 9, 2021 at 8:24 PM Douglas O'flaherty <douglasof at us.ibm.com>
wrote:

> Jonathan:
>
> You posed a reasonable question, which was "when is RDMA worth the
> hassle?"  I agree with part of your premise, which is that it only matters
> when the bottleneck isn't somewhere else. With a parallel file system like
> Scale/GPFS, the absolute performance bottleneck is not the throughput of a
> single drive. In the majority of Scale/GPFS clusters the network data path
> is the performance limitation. Once they deploy HDR InfiniBand or
> 100/200/400Gbps Ethernet, the buffer copy time inside the server starts to
> matter.
>
> When the device is an accelerator, like a GPU, the benefit of RDMA (GDS)
> is easily demonstrated because it eliminates the bounce copy through
> system memory. In our NVIDIA DGX A100 server testing we were able to get
> around 2x the per-system throughput by using RDMA direct to the GPU
> (GPUDirect Storage). (Tested on 2 DGX systems with 4x HDR links per
> storage node.)
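>
> A rough model of why removing that copy roughly doubles per-server
> throughput once the links are fast enough (the memory-copy bandwidth
> below is an assumption for illustration, not a DGX measurement):
>
>   # Per-server throughput with and without a host bounce buffer.
>   # With the bounce path the same bytes transit host memory twice
>   # (NIC -> host buffer, host buffer -> GPU), so the usable copy
>   # bandwidth is effectively halved.
>   link_gb_s = 4 * 25.0   # e.g. 4x HDR-class links per node (~25 GB/s each)
>   copy_gb_s = 80.0       # assumed usable host memory-copy bandwidth, GB/s
>
>   bounce = min(link_gb_s, copy_gb_s / 2)   # copy in, then copy out to GPU
>   direct = min(link_gb_s, copy_gb_s)       # RDMA straight to GPU memory
>   print(bounce, direct, direct / bounce)   # -> 40.0 80.0 2.0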
>
> However, your question remains. Synthetic benchmarks are good indicators
> of technical benefit, but do your users and applications need that extra
> performance?
>
> There are probably only a handful of codes in any organization that need
> this. However, they are high-value use cases. We have client applications
> that either read a lot of data semi-randomly and uncached - think
> mini-Epics for scaling ML training - or demand the lowest response time,
> like production inference on voice recognition and NLP.
>
> If anyone has use cases for GPU-accelerated codes with truly demanding
> data needs, please reach out directly. We are looking for more use cases
> to characterize the benefit for a new paper. If you can provide some code
> examples, we can help test whether RDMA direct to the GPU (GPUDirect
> Storage) is a benefit.
>
> Thanks,
>
> doug
>
> Douglas O'Flaherty
> douglasof at us.ibm.com
>
>
>
>
>
>
>
> ----- Original message -----
> From: "Jonathan Buzzard" <jonathan.buzzard at strath.ac.uk>
> Sent by: gpfsug-discuss-bounces at spectrumscale.org
> To: gpfsug-discuss at spectrumscale.org
> Cc:
> Subject: [EXTERNAL] Re: [gpfsug-discuss] alternate path between ESS
> Servers for Datamigration
> Date: Fri, Dec 10, 2021 10:27
>
> On 09/12/2021 16:04, Douglas O'flaherty wrote:
> >
> > Though not directly about your design, our work with NVIDIA on GPUdirect
> > Storage and SuperPOD has shown how sensitive RDMA (IB & RoCE) can be to
> > both MOFED and firmware version compatibility.
> >
> > I would suggest anyone debugging RDMA issues should look at those
> > closely.
> >
> May I ask what the alleged benefits of using RDMA in GPFS are?
>
> I can see there would be lower latency over a plain IP Ethernet or IPoIB
> solution, but surely disk latency is going to swamp that?
>
> I guess SSD drives might change that calculation, but I have never seen
> proper benchmarks comparing the two, or better yet all four connection
> options.
>
> It just seems a lot of complexity and fragility for very little gain to me.
>
>
> JAB.
>
> --
> Jonathan A. Buzzard                         Tel: +44141-5483420
> HPC System Administrator, ARCHIE-WeSt.
> University of Strathclyde, John Anderson Building, Glasgow. G4 0NG

