[gpfsug-discuss] Intel Whitepaper - Spectrum Scale & LROC with NVMe

Matt Weil mweil at wustl.edu
Tue Dec 6 16:40:25 GMT 2016


Hello all,

Thanks for sharing that. I am setting this up on our CES nodes.  In this example the NVMe device names are not persistent across reboots.  RHEL's default udev rules do publish them persistently under /dev/disk/by-id/, keyed by serial number, so I modified mmdevdiscover to look for them there.  What are others doing? Custom udev rules for the NVMe devices?
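
If anyone does go the custom-rule route, I would imagine something like the following untested sketch. The file name, the /dev/nvme-by-serial/ symlink directory, and the reliance on the controller's "serial" sysfs attribute are all my own assumptions, not anything RHEL ships:

    # /etc/udev/rules.d/99-nvme-serial.rules (hypothetical sketch)
    # For whole NVMe namespaces (not partitions), create a stable symlink
    # /dev/nvme-by-serial/<serial>, taking the serial from the parent controller.
    KERNEL=="nvme*n*", SUBSYSTEM=="block", ENV{DEVTYPE}=="disk", \
      ATTRS{serial}=="?*", SYMLINK+="nvme-by-serial/$attr{serial}"

Reloading with "udevadm control --reload" and re-triggering with "udevadm trigger" should then repopulate the links without a reboot.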

Also, I have used LVM in the past to stitch multiple NVMe devices together for better performance.  I am wondering whether, in the GPFS use case, that may hurt performance by hindering the ability of GPFS to do direct IO or to access memory directly.  Any opinions there?
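
For concreteness, the striping I did looked roughly like this (the device names and the 64 KiB stripe size are just illustrative, and lroc_vg/lroc_lv are made-up names):

    # Stripe two NVMe namespaces into one logical volume (hypothetical devices)
    pvcreate /dev/nvme0n1 /dev/nvme1n1
    vgcreate lroc_vg /dev/nvme0n1 /dev/nvme1n1
    # -i 2: stripe across both PVs; -I 64: 64 KiB stripe size
    lvcreate -i 2 -I 64 -l 100%FREE -n lroc_lv lroc_vg

The question is whether pushing LROC through device-mapper like this gets in the way of direct IO from mmfsd, or whether the extra layer is effectively free.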

Thanks

Matt

On 12/5/16 10:33 AM, Ulf Troppens wrote:

FYI ... in case not seen ... benchmark for LROC with NVMe
http://www.intel.com/content/dam/www/public/us/en/documents/white-papers/performance-gains-ibm-spectrum-scale.pdf


--
IBM Spectrum Scale Development - Client Engagements & Solutions Delivery
Consulting IT Specialist
Author "Storage Networks Explained"

IBM Deutschland Research & Development GmbH
Chairwoman of the Supervisory Board: Martina Koederitz
Management: Dirk Wittkopp
Registered office: Böblingen / Court of registration: Amtsgericht Stuttgart, HRB 243294





_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


