[gpfsug-discuss] Use of commodity HDs on large GPFS client base clusters?

Oesterlin, Robert Robert.Oesterlin at nuance.com
Tue Mar 15 20:42:59 GMT 2016


Hi Jamie

I have some fairly large clusters (though not as large as you describe) running on "roll your own" storage subsystems of various types. You're asking a broad question here about performance and rebuild times. I can't speak to a direct comparison with ESS (I'm sure IBM can comment), but if you want to discuss my experiences with larger (multi-PB) clusters, hard drives, and performance, I'd be happy to do so. Drop me a note at robert.oesterlin at nuance.com and we can chat at length.

Bob Oesterlin
Sr Storage Engineer, Nuance HPC Grid



From: <gpfsug-discuss-bounces at spectrumscale.org> on behalf of Jaime Pinto <pinto at scinet.utoronto.ca>
Reply-To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Date: Tuesday, March 15, 2016 at 2:39 PM
To: "gpfsug-discuss at gpfsug.org" <gpfsug-discuss at gpfsug.org>
Subject: [gpfsug-discuss] Use of commodity HDs on large GPFS client base clusters?

I'd like to hear about performance considerations from sites that may
be using storage hardware other than "IBM-sanctioned" appliances such
as DDN, GSS, or ESS (we have all of these).

For instance, how would such a setup compare with ESS, which I
understand has a "dispersed parity" feature that substantially
reduces rebuild time after a drive failure?
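(For readers unfamiliar with the "dispersed parity" point above: the idea behind declustered RAID is that a failed drive's data is scattered across the whole array, so every surviving drive contributes rebuild bandwidth in parallel, instead of the rebuild being bottlenecked on writing one spare. A back-of-the-envelope sketch, with all numbers purely illustrative assumptions rather than ESS measurements:)

```python
# Illustrative model of why declustered ("dispersed") parity shortens
# rebuild times.  All figures below are assumed for illustration only.

DRIVE_TB = 10              # capacity of the failed drive (TB)
DRIVE_MBPS = 150           # sustained per-drive throughput (MB/s)
ARRAY_DRIVES = 100         # drives sharing the declustered array
REBUILD_FRACTION = 0.2     # share of per-drive bandwidth given to rebuild

TB_TO_MB = 1_000_000

def rebuild_hours(tb, mbps):
    """Time to move `tb` terabytes at `mbps` MB/s, in hours."""
    return tb * TB_TO_MB / mbps / 3600

# Conventional RAID: rebuild is bottlenecked by writing a single
# spare drive at (throttled) speed.
conventional = rebuild_hours(DRIVE_TB, DRIVE_MBPS * REBUILD_FRACTION)

# Declustered RAID: all surviving drives rebuild slices in parallel.
declustered = rebuild_hours(
    DRIVE_TB, (ARRAY_DRIVES - 1) * DRIVE_MBPS * REBUILD_FRACTION)

print(f"conventional: {conventional:6.1f} h")
print(f"declustered:  {declustered:6.1f} h")
```

Under these assumed numbers the rebuild time drops by roughly the number of surviving drives (about 99x here), which matches the qualitative claim about ESS above.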

I'm particularly interested in HPC sites with 5000+ clients mounting
such commodity NSD+HD setups.

Thanks
Jaime


---
Jaime Pinto
SciNet HPC Consortium  - Compute/Calcul Canada
www.scinet.utoronto.ca - www.computecanada.org
University of Toronto
256 McCaul Street, Room 235
Toronto, ON, M5T1W5
P: 416-978-2755
C: 416-505-1477

----------------------------------------------------------------
This message was sent using IMP at SciNet Consortium, University of Toronto.


_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


