[gpfsug-discuss] WekaIO Unveils Cloud-Native Scalable File System

Sven Oehme oehmes at gmail.com
Wed Jul 12 18:24:19 BST 2017


While I really like competition on SpecSFS, the claims from the WekaIO
people are, let's say, 'alternative facts' at best.
The Spectrum Scale results were done on 4 nodes with 2 flash storage
devices attached; they compare this to a WekaIO system with 14 times more
memory (14 TB vs 1 TB), 120 SSDs (vs 64 FlashCore Modules), across 15 times
more compute nodes (60 vs 4).
On top of all this, the article claims 1,000 builds, while the actual submission
only delivers 500 --> https://www.spec.org/sfs2014/results/sfs2014.html
So they need 14 times more memory and cores and 2 times the flash to show twice
as many builds at double the response time; I leave it to everybody who
understands these facts to judge how great that result really is.
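
To make the back-of-the-envelope math explicit, here is a small illustrative
Python sketch using only the numbers quoted in this thread (240 builds from the
press release below, 500 builds from the SPEC submission); these are not
official SPEC-normalized metrics, just a rough builds-per-resource comparison:

    # Rough builds-per-resource comparison using figures quoted in this thread.
    # Illustrative only -- not official SPEC SFS 2014 reporting.
    systems = {
        "Spectrum Scale": {"builds": 240, "nodes": 4,  "memory_tb": 1,  "flash_devices": 64},
        "WekaIO":         {"builds": 500, "nodes": 60, "memory_tb": 14, "flash_devices": 120},
    }
    for name, s in systems.items():
        print(f"{name:15s} builds/node={s['builds'] / s['nodes']:6.1f}  "
              f"builds/TB-memory={s['builds'] / s['memory_tb']:6.1f}  "
              f"builds/flash-device={s['builds'] / s['flash_devices']:5.2f}")

Normalized per node or per TB of memory, the Spectrum Scale submission comes out
well ahead, which is the point being made above.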
That said, Spectrum Scale scales almost linearly if you double the nodes,
network, and storage accordingly, so there is no reason to believe we
couldn't easily beat this; it's just a matter of assembling the HW in a lab
and running the test. BTW, we scale to 10k+ nodes, 2,500 times the number we
used in our publication :-D

Sven

On Wed, Jul 12, 2017 at 9:06 AM Oesterlin, Robert <
Robert.Oesterlin at nuance.com> wrote:

> Interesting. Performance is one thing, but how usable. IBM, watch your
> back :-)
>
>
>
> *“WekaIO is the world’s fastest distributed file system, processing four
> times the workload compared to IBM Spectrum Scale measured on Standard
> Performance Evaluation Corp. (SPEC) SFS 2014, an independent industry
> benchmark. Utilizing only 120 cloud compute instances with locally attached
> storage, WekaIO completed 1,000 simultaneous software builds compared to
> 240 on IBM’s high-end FlashSystem 900.”*
>
>
>
>
> https://www.hpcwire.com/off-the-wire/wekaio-unveils-cloud-native-scalable-file-system/
>
>
>
> Bob Oesterlin
> Sr Principal Storage Engineer, Nuance
> 507-269-0413
>
>
>
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>
