[gpfsug-discuss] GPFS and Lustre on same node

Sergi Moré Codina sergi.more at bsc.es
Fri Aug 8 14:14:33 BST 2014


Hi all,

Regarding the main differences between GPFS and Lustre, here are some 
notes from our experience:

-Reliability: In our experience GPFS has proved to be more stable and 
reliable. It also offers more flexibility in terms of fail-over, with no 
restriction on the number of servers: as far as I know, an NSD can have 
as many secondary servers as you want (we are using 8; see the stanza 
sketch after this list).

-Metadata: In Lustre, each file system is restricted to two metadata 
servers. There is no such restriction in GPFS.

-Updates: In GPFS you can update the whole storage cluster one server at 
a time, without stopping production (see the rolling-update sketch after 
this list).

-Server/Client role: As Jeremy said, in GPFS every server acts as a 
client as well, which is useful for administrative tasks.

-Troubleshooting: Problems with GPFS are easier to track down. Its logs 
are clearer, and it offers better tools than Lustre.

-Support: No problems at all with GPFS software support. It is true that 
it can take time to escalate through all the support levels, but we have 
always got a good solution. Hardware support is quite a different story: 
IBM support quality has dropped a lot over roughly the last year and a 
half. Getting replacements is a really slow and tedious process, and on 
top of that we keep receiving bad "certified reused parts" hardware, 
which slows the whole process down even more.


These are the main differences I would point out after some years of 
experience with both file systems, but do not take them as hard facts.

PS: Salvatore, I would suggest you contact Jordi Valls. He joined EBI 
a couple of months ago, and has experience working with both file 
systems here at BSC.

Best Regards,
Sergi.


On 08/08/2014 01:40 PM, Jeremy Robst wrote:
> On Fri, 8 Aug 2014, Salvatore Di Nardo wrote:
>
>> Now, skipping all this GSS rant, which has nothing to do with the file
>> system anyway, and going back to my question:
>>
>> Could someone point the main differences between GPFS and Lustre?
>
> I'm looking at making the same decision here - to buy GPFS or to roll
> our own Lustre configuration. I'm in the process of setting up test
> systems, and so far the main difference seems to be that in GPFS
> each server sees the full filesystem, and so you can run other
> applications (e.g. backup) on a GPFS server, whereas the Lustre OSSes
> (object storage servers) see only a portion of the storage (the
> filesystem is striped across the OSSes), so you need a Lustre client
> to mount the full filesystem for things like backup.
>
> However I have very little practical experience of either and would also
> be interested in any comments.
>
> Thanks
>
> Jeremy
>


-- 

------------------------------------------------------------------------

      Sergi More Codina
      Barcelona Supercomputing Center
      Centro Nacional de Supercomputacion
      WWW: http://www.bsc.es      Tel: +34-93-405 42 27
      e-mail: sergi.more at bsc.es   Fax: +34-93-413 77 21

------------------------------------------------------------------------




