[gpfsug-discuss] maybe a silly question about "old school" gpfs

Kalyan Gunda kgunda at in.ibm.com
Wed Nov 5 10:25:07 GMT 2014


In the case of SAN connectivity, all nodes can write to the disks directly.  This
avoids going over the network to reach the disks.
Only when local access is not available, either due to connectivity or zoning,
will a node use the defined NSD server.
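
For example, you can verify per node whether disk I/O is local (over the SAN) or
goes through an NSD server; a minimal check, assuming a file system named gpfs0:

    # Show, for every node in the cluster, whether I/O to each disk
    # is satisfied locally or via an NSD server
    mmlsdisk gpfs0 -M

    # Show the same information only for the node running the command
    mmlsdisk gpfs0 -m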

If there is a need to have a node always use an NSD server, you can
enforce it via the mount option -o useNSDserver=always.
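
As a sketch (assuming a file system named gpfs0; the option also accepts asfound,
asneeded and never), it can be passed at mount time or made the default mount option:

    # Force NSD-server access for this mount only
    mmmount gpfs0 -o useNSDserver=always

    # Make it the default mount option for the file system
    mmchfs gpfs0 -o useNSDserver=always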
If the first NSD server is down, GPFS will use the next NSD server in the
list.  In general, the NSD servers form a priority list of servers rather than
the primary/secondary configuration that applies when using native RAID.

Also note that multiple nodes accessing the same disk will not cause
corruption, as higher-level token management in GPFS takes care of data
consistency.
Regards
Kalyan C Gunda
STSM, Elastic Storage Development
Member of The IBM Academy of Technology
EGL D Block, Bangalore




From:	Salvatore Di Nardo <sdinardo at ebi.ac.uk>
To:	gpfsug main discussion list <gpfsug-discuss at gpfsug.org>
Date:	11/05/2014 03:44 PM
Subject:	[gpfsug-discuss] maybe a silly question about "old school" gpfs
Sent by:	gpfsug-discuss-bounces at gpfsug.org



Hello again,
to understand GPFS better, I recently built a test GPFS cluster using
some old hardware that was about to be retired. The storage consists of SAN
devices, so instead of using native RAID I went for the old-school GPFS
approach. The configuration is basically:

3x servers
3x SAN storage arrays
2x SAN switches

I did no zoning, so all the servers can see all the LUNs, but at NSD
creation I gave each LUN a primary, secondary and tertiary server,
according to the following rule (a rough stanza sketch follows the table):
|-------------------+---------------+--------------------+---------------|
|STORAGE            |primary        |secondary           |tertiary       |
|-------------------+---------------+--------------------+---------------|
|storage1           |server1        |server2             |server3        |
|-------------------+---------------+--------------------+---------------|
|storage2           |server2        |server3             |server1        |
|-------------------+---------------+--------------------+---------------|
|storage3           |server3        |server1             |server2        |
|-------------------+---------------+--------------------+---------------|
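
For reference, the NSD stanzas looked roughly like the sketch below (device and
NSD names are placeholders, only one LUN per storage array shown; the servers
list gives the access preference order), fed to mmcrnsd with -F <stanza file>:

    %nsd: device=/dev/mapper/stor1_lun0
      nsd=stor1_nsd0
      servers=server1,server2,server3
      usage=dataAndMetadata
      failureGroup=1

    %nsd: device=/dev/mapper/stor2_lun0
      nsd=stor2_nsd0
      servers=server2,server3,server1
      usage=dataAndMetadata
      failureGroup=2

    %nsd: device=/dev/mapper/stor3_lun0
      nsd=stor3_nsd0
      servers=server3,server1,server2
      usage=dataAndMetadata
      failureGroup=3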




Looking at mmcrnsd, my understanding was that the primary server is
the one that writes to the NSD unless it fails, in which case the next server
in the list takes over ownership of the LUN.
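
For reference, the server order assigned to each NSD can be verified with
mmlsnsd, for example:

    # List every NSD with its file system and NSD server list (in priority order)
    mmlsnsd

    # Map each NSD name to the local device path as seen from this node
    mmlsnsd -m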

Now comes the question:
when I ran a dd from server1, I was surprised to discover that server1 was
writing to all the LUNs; the other two servers were doing nothing. This
behaviour surprised me because on GSS only the recovery group (RG) owner can
write, so one server "asks" the other servers to write to their own RGs. In
fact, on GSS you can see a lot of Ethernet traffic and I/Os on each server.
While I understand that the situation is different, I am puzzled by the fact
that all the servers seem able to write to all the LUNs.

SAN devices usually should be connected to one server only, as parallel
access could create data corruption. In environments where you connect a
SAN to multiple servers (for example, a VMware cloud), it is the software's
task to prevent servers from overwriting each other's data (and causing corruption).

Honestly, what I was expecting was server1 writing to its own LUNs, and
data traffic (Ethernet) to the other two servers, basically asking them to
write to the other LUNs. I don't know whether this behaviour is normal or not.
I tried to find documentation about this, but could not find any.

Could somebody tell me whether this "every server writes to all the LUNs"
behaviour is intended or not?

Thanks in advance,
Salvatore

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at gpfsug.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss




