[gpfsug-discuss] maybe a silly question about "old school" gpfs

Stijn De Weirdt stijn.deweirdt at ugent.be
Wed Nov 5 10:25:07 GMT 2014


yes, this behaviour is normal, and a bit annoying sometimes, but GPFS 
doesn't really like (and isn't designed for) running stuff on the NSD 
servers directly. since you did no zoning, every server has direct SAN 
access to every LUN, so each node simply does its own local I/O; the 
NSD server list is only used by nodes that can't see a disk directly. 
on GSS the servers probably forward the data to the other NSD servers 
to distribute the (possible) compute cost of the software raid, a cost 
that doesn't exist for regular LUN access. (but you also shouldn't be 
running workloads on the GSS NSDs ;)
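
if you want to see this for yourself, here is a quick sketch (only the 
two commands below are real; which output you get obviously depends on 
your cluster):

    # show which local device path each node resolves every NSD to;
    # a node that sees an NSD as a local device does its own I/O
    # over the SAN instead of going through an NSD server
    mmlsnsd -m

    # while your dd is running, dump the recent I/O history on each
    # server; on server1 you should see local disk I/O for all the
    # LUNs and next to no NSD traffic going to server2/server3
    mmdiag --iohist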

stijn

On 11/05/2014 11:15 AM, Salvatore Di Nardo wrote:
> Hello again,
> to understand GPFS better, I recently built a test GPFS cluster
> using some old hardware that was about to be retired. The storage was
> SAN devices, so instead of using native RAID I went for the old-school
> GPFS setup. The configuration is basically:
>
> 3x servers
> 3x san storages
> 2x san switches
>
> I did no zoning, so all the servers can see all the LUNs, but on NSD
> creation I gave each LUN a primary, secondary, and tertiary server,
> with the following rule:
>
> STORAGE      primary     secondary   tertiary
> storage1     server1     server2     server3
> storage2     server2     server3     server1
> storage3     server3     server1     server2
>
>
>
> Looking at mmcrnsd, it was my understanding that the primary server
> is the one that writes to the NSD; if it fails, the next server in
> the list takes ownership of the LUN.
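>
> For reference, NSD stanzas for that layout would look roughly like
> this (device paths, NSD names, and failure groups here are made up):
>
>     %nsd: nsd=nsd_st1_a
>       device=/dev/mapper/storage1_lun_a
>       servers=server1,server2,server3
>       usage=dataAndMetadata
>       failureGroup=1
>     %nsd: nsd=nsd_st2_a
>       device=/dev/mapper/storage2_lun_a
>       servers=server2,server3,server1
>       usage=dataAndMetadata
>       failureGroup=2
>     %nsd: nsd=nsd_st3_a
>       device=/dev/mapper/storage3_lun_a
>       servers=server3,server1,server2
>       usage=dataAndMetadata
>       failureGroup=3
>
>     # created with:
>     mmcrnsd -F nsd.stanza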
>
> Now comes the question:
> when I ran a dd from server1, I was surprised to discover that server1
> was writing to all the LUNs while the other two servers were doing
> nothing. This behaviour surprised me because on GSS only the RG owner
> can write, so one server "asks" the other servers to write to their
> own RGs. In fact on GSS you can see a lot of Ethernet traffic and io/s
> on each server. While I understand that the situation is different,
> I'm puzzled by the fact that all the servers seem able to write to all
> the LUNs.
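>
> (The test was along these lines; the mount point and file are made up:)
>
>     # write a big file from server1 into the GPFS filesystem
>     dd if=/dev/zero of=/gpfs/test/bigfile bs=1M count=10240
>     # meanwhile, watch the disk traffic on each of the three servers
>     iostat -xm 1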
>
> SAN devices usually should be connected to one server only, as
> parallel access could cause data corruption. In environments where you
> connect a SAN to multiple servers (for example a VMware cloud), it's
> the software's task to prevent the servers overwriting each other's
> data (and corrupting it).
>
> Honestly, what I was expecting is: server1 writing to its own LUNs,
> and data traffic (Ethernet) to the other two servers, basically asking
> *them* to write to the other LUNs. I don't know if this behaviour is
> normal or not. I tried to find documentation about it, but could not
> find any.
>
> Could somebody tell me if this _/"every server writes to all the
> LUNs"/_ behaviour is intended or not?
>
> Thanks in advance,
> Salvatore
>
>
>


