[gpfsug-discuss] Used virtualization technologies for GPFS/Spectrum Scale

service at metamodul.com service at metamodul.com
Mon Apr 24 13:21:09 BST 2017


Hi Jonathan,
today's hardware is so powerful that imho it might make sense to split a
CEC into more "pieces". For example the IBM S822L has up to 2x12 cores
and 9 PCIe Gen3 slots (4 x16 lanes & 5 x8 lanes).
I think that such a server is a little bit too big just to be a single
NSD server.
Note that I use a dedicated node for each GPFS service.
So if I were to go for 4 NSD servers, 6 protocol nodes, 2 TSM backup
nodes and at least 3 test servers, a total of 15 servers is needed.
Imho 4x S822L could handle this and a little bit more quite well.
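
As a purely hypothetical sketch (the node names and the spread are made
up), those 15 LPARs could be laid out across the four machines so that
no service loses more than one node when a CEC goes down:

    CEC1: nsd1, proto1, proto2, tsm1
    CEC2: nsd2, proto3, proto4, tsm2
    CEC3: nsd3, proto5, proto6, test1
    CEC4: nsd4, test2, test3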

Of course blade technology could be used or 1U server.

With kind regards
Hajo

-- 
Unix Systems Engineer
MetaModul GmbH
+49 177 4393994

-------- Original Message --------
From: Jonathan Buzzard <jonathan at buzzard.me.uk>
Date: 2017.04.24 13:14 (GMT+01:00)
To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Subject: Re: [gpfsug-discuss] Used virtualization technologies for GPFS/Spectrum Scale

On Mon, 2017-04-24 at 12:28 +0200, Hans-Joachim Ehlers wrote:
> @All
> 
> 
> does anybody use virtualization technologies for GPFS servers? If yes,
> what kind, and why have you selected your solution?
> 
> I am currently thinking about using Linux on Power with 40G SR-IOV for
> the network and NPIV/dedicated FC adapters for storage. As a plus I can
> also assign only a certain number of CPUs to GPFS. (Lower license
> cost / you pay for what you use.)
> 
> 
> I must admit that I am not familiar with how "good" KVM/ESX is with
> respect to direct assignment of hardware. Thus the question to the
> group.
> 
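
For reference, direct assignment of an SR-IOV virtual function under
KVM/libvirt looks roughly like the sketch below. This is a minimal,
illustrative outline only; the NIC name (enp3s0), the VF PCI address
and the guest name (gpfs-nsd1) are all placeholders:

    # create virtual functions on an SR-IOV capable NIC
    echo 4 > /sys/class/net/enp3s0/device/sriov_numvfs

    # describe one VF as a hostdev-backed interface for the guest
    cat > vf.xml <<'EOF'
    <interface type='hostdev' managed='yes'>
      <source>
        <address type='pci' domain='0x0000' bus='0x03' slot='0x10' function='0x0'/>
      </source>
    </interface>
    EOF

    # attach it to the guest so it survives restarts
    virsh attach-device gpfs-nsd1 vf.xml --persistent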

For the most part GPFS is used at scale, and in general all the
components are redundant. As such, why you would want to allocate less
than a whole server to a production GPFS system is somewhat beyond me.

That is, you will have a bunch of NSD servers in the system, and if one
crashes the other NSD servers take over. The same goes for protocol
nodes, and in general the total file system size is going to be
hundreds of TB, otherwise why bother with GPFS.
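
That takeover behaviour comes from defining more than one server per
NSD. A minimal sketch of an NSD stanza (the device, NSD name and server
names are made up; the first listed server is primary and the others
take over in order):

    cat > nsd.stanza <<'EOF'
    %nsd:
      device=/dev/mapper/lun01
      nsd=nsd_lun01
      servers=nsdserver1,nsdserver2
      usage=dataAndMetadata
    EOF
    mmcrnsd -F nsd.stanza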

I guess there is currently potential value in sticking the GUI into a
virtual machine to get redundancy.

On the other hand, if you want a test rig then virtualization works
wonders. I have put GPFS on a single Linux box, using LVs for the disks
and mapping them into virtual machines under KVM.
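
Something along these lines, assuming a volume group vg0 and a KVM
guest named gpfs-node1 (both names made up):

    # carve a logical volume out of the local volume group
    lvcreate -L 20G -n gpfs_nsd1 vg0

    # hand it to the guest as a block device; inside the VM it shows
    # up as /dev/vdb and can then be turned into a GPFS NSD
    virsh attach-disk gpfs-node1 /dev/vg0/gpfs_nsd1 vdb --persistent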

JAB.

-- 
Jonathan A. Buzzard                 Email: jonathan (at) buzzard.me.uk
Fife, United Kingdom.

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss

