[gpfsug-discuss] Policy scan against billion files for ILM/HSM

Jonathan Buzzard jonathan at buzzard.me.uk
Tue Apr 11 11:21:05 BST 2017


On Tue, 2017-04-11 at 00:49 -0400, Zachary Giles wrote:

[SNIP]

> * Then throw ~8 well tuned Infiniband attached nodes at it using -N,
> If they're the same as the NSD servers serving the flash, even better.
> 

Exactly how much are you going to gain from Infiniband over 40Gbps or
even 100Gbps Ethernet? Not a lot, I would have thought. Even with
flash, all your latency is going to be in the flash, not the Ethernet.

Unless you have a compute cluster and need Infiniband for the MPI
traffic, it is surely better to stick to Ethernet. Infiniband is rather
esoteric; what I call a minority sport, best avoided if at all
possible.

Even if you have an Infiniband fabric, I would argue that, given
current core counts and price points for 10Gbps Ethernet, you are
actually better off keeping your storage traffic on Ethernet and
reserving the Infiniband for MPI duties. That is, 10Gbps Ethernet to
the compute nodes and 40/100Gbps Ethernet on the storage nodes.
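
For reference, the multi-node scan Zachary describes is driven by
mmapplypolicy's -N option, and the same command works regardless of
the interconnect. A rough sketch, with the file system name, policy
file, node names and work directory all placeholders rather than
anything from this thread:

  # Dry run: evaluate the policy rules in parallel on four helper
  # nodes without executing any actions. Point -g at a directory on
  # fast storage so the shared work files are not the bottleneck.
  mmapplypolicy gpfs0 -P /path/to/ilm_policy.pol \
      -N nsd01,nsd02,nsd03,nsd04 \
      -g /gpfs/gpfs0/.policytmp \
      -I test

The point stands either way: the scan parallelism comes from -N and
the policy engine, not from whatever fabric the helper nodes happen to
be attached with.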

JAB.

-- 
Jonathan A. Buzzard                 Email: jonathan (at) buzzard.me.uk
Fife, United Kingdom.



