[gpfsug-discuss] gpfs native raid

Aaron Knister aaron.s.knister at nasa.gov
Tue Aug 30 18:16:03 BST 2016


Thanks Christopher. I've tried GPFS on zvols a couple of times, and the 
write throughput I get is terrible because of the required sync=always 
property. Perhaps a couple of SSDs could help get the number up, though.
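
For anyone curious, here's a minimal sketch of the kind of zvol setup being
discussed, not a tested recipe; the pool name (tank), zvol name (tank/nsd0),
size, and SSD device paths below are all hypothetical placeholders:

    #!/usr/bin/env python3
    # Sketch only: pool, zvol, and device names are hypothetical.
    import subprocess

    def run(*cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Create a zvol to serve as a GPFS NSD (size is illustrative).
    run("zfs", "create", "-V", "100G", "tank/nsd0")

    # sync=always forces every write through the ZIL, which is what
    # kills throughput without fast log devices.
    run("zfs", "set", "sync=always", "tank/nsd0")

    # A mirrored SSD log device (SLOG) absorbs those synchronous
    # writes; this is the "couple of SSDs" idea above.
    run("zpool", "add", "tank", "log", "mirror",
        "/dev/disk/by-id/ssd0", "/dev/disk/by-id/ssd1")

Note that a SLOG only helps the synchronous write path; it does nothing
for reads.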

-Aaron

On 8/30/16 12:47 PM, Christopher Maestas wrote:
> Interestingly enough, Spectrum Scale can run on zvols. Check out:
>
> http://files.gpfsug.org/presentations/2016/anl-june/LANL_GPFS_ZFS.pdf
>
> -cdm
>
> ------------------------------------------------------------------------
> On Aug 30, 2016, 9:17:05 AM, aaron.s.knister at nasa.gov wrote:
>
> From: aaron.s.knister at nasa.gov
> To: gpfsug-discuss at spectrumscale.org
> Cc:
> Date: Aug 30, 2016 9:17:05 AM
> Subject: [gpfsug-discuss] gpfs native raid
>
> Does anyone know if/when we might see gpfs native raid opened up for the
> masses on non-IBM hardware? It's hard to answer the question of "why
> can't GPFS do this? Lustre can" with regard to Lustre's integration with
> ZFS and its support for RAID on commodity hardware.
> -Aaron
> --
> Aaron Knister
> NASA Center for Climate Simulation (Code 606.2)
> Goddard Space Flight Center
> (301) 286-2776
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>

-- 
Aaron Knister
NASA Center for Climate Simulation (Code 606.2)
Goddard Space Flight Center
(301) 286-2776


