[gpfsug-discuss] Policy scan against billion files for ILM/HSM

Zachary Giles zgiles at gmail.com
Tue Apr 11 12:50:26 BST 2017


Yeah, that can be true. I was just trying to show the size/shape of
system that can achieve this. There's a good chance 10G or 40G Ethernet
would yield similar results, especially if you're running the policy on
the NSD servers.
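
For what it's worth, a rough sketch of what I mean (the file system
name, node list, paths and tuning values here are made up; adjust for
your own environment):

  /* /tmp/scan.pol: list files over 1 GiB. EXEC '' plus -I defer
     leaves the candidate lists on disk instead of calling a script. */
  RULE EXTERNAL LIST 'big' EXEC ''
  RULE 'find-big' LIST 'big' WHERE FILE_SIZE > 1073741824

  # Run the scan across several NSD servers; -g must point at a
  # directory visible to every node named with -N.
  mmapplypolicy gpfs0 -P /tmp/scan.pol \
      -N nsd01,nsd02,nsd03,nsd04 \
      -g /gpfs/gpfs0/.policytmp \
      -I defer -L 1 -m 24 -a 8

The -N node list and the per-node thread counts (-m, -a) are where most
of the scan rate comes from; the interconnect just has to keep those
nodes fed.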

On Tue, Apr 11, 2017 at 6:21 AM, Jonathan Buzzard
<jonathan at buzzard.me.uk> wrote:
> On Tue, 2017-04-11 at 00:49 -0400, Zachary Giles wrote:
>
> [SNIP]
>
>> * Then throw ~8 well tuned Infiniband attached nodes at it using -N,
>> If they're the same as the NSD servers serving the flash, even better.
>>
>
> Exactly how much are you going to gain from Infiniband over 40Gbps or
> even 100Gbps Ethernet? Not a lot, I would have thought. Even with
> flash, all your latency is going to be in the flash, not the Ethernet.
>
> Unless you have a compute cluster and need Infiniband for the MPI
> traffic, it is surely better to stick to Ethernet. Infiniband is
> rather esoteric; it is what I call a minority sport, best avoided if
> at all possible.
>
> Even if you have an Infiniband fabric, I would argue that, given
> current core counts and price points for 10Gbps Ethernet, you are
> better off keeping your storage traffic on the Ethernet and reserving
> the Infiniband for MPI duties. That is, 10Gbps Ethernet to the compute
> nodes and 40/100Gbps Ethernet on the storage nodes.
>
> JAB.
>
> --
> Jonathan A. Buzzard                 Email: jonathan (at) buzzard.me.uk
> Fife, United Kingdom.
>



-- 
Zach Giles
zgiles at gmail.com


