[gpfsug-discuss] maybe a silly question about "old school" gpfs

Laurence Horrocks-Barlow lhorrocks-barlow at ocf.co.uk
Wed Nov 5 10:47:06 GMT 2014


Hi Salvatore,

GSS and GPFS systems are different beasts.

In a traditional GPFS configuration I would expect any NSD server to 
write to any/all LUNs that it can see as a local disk, provided they 
are part of the same filesystem.
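
For example, a minimal NSD stanza for mmcrnsd (a sketch only; the NSD 
name, device path and values below are illustrative, assuming the 
stanza format of GPFS 3.5 and later) could look like:

    %nsd:
      nsd=nsd1
      device=/dev/mapper/lun1
      servers=server1,server2,server3
      usage=dataAndMetadata
      failureGroup=1

The servers list matters to nodes that cannot open the device 
themselves; any node that sees the LUN as a local block device will 
normally use the local path rather than shipping I/O to an NSD server.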

In GSS there is effectively a software RAID layer (GPFS Native RAID) 
added on top of the disks; with this I would expect only the recovery 
group (RG) owner to write down to the vdisk.
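
If you want to see which server owns which recovery group on a GSS/GNR 
system, something along these lines should work (the RG name below is 
made up):

    mmlsrecoverygroup              # list all RGs and their servers
    mmlsrecoverygroup rgL -L       # details of one RG, incl. vdisks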

As for corruption, GPFS uses a distributed token (lock) system to 
manage concurrent access to data blocks, metadata, etc., which is what 
lets multiple nodes write to the same LUN safely.
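
You can get a feel for this on a live cluster with, for example:

    mmlsmgr               # which node is the file system manager
    mmdiag --tokenmgr     # token manager state on the local node

(the exact mmdiag output varies between releases).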

Kind Regards,

Laurence Horrocks-Barlow
Linux Systems Software Engineer
OCF plc

Tel: +44 (0)114 257 2200
Fax: +44 (0)114 257 0022
Web: http://www.ocf.co.uk
Blog: http://blog.ocf.co.uk
Twitter: @ocfplc

OCF plc is a company registered in England and Wales. Registered number 
4132533, VAT number GB 780 6803 14. Registered office address: OCF plc, 
5 Rotunda Business Centre, Thorncliffe Park, Chapeltown, Sheffield, S35 
2PG.

This message is private and confidential. If you have received this 
message in error, please notify us and remove it from your system.

On 11/05/2014 10:33 AM, Salvatore Di Nardo wrote:
> I understand that my test is a bit particular because the client was 
> also one of the servers.
> Usually clients don't have direct access to the storage, but it still 
> made me think about how things are supposed to work.
>
> For example, I did another test with 3 dd's, one on each server. All 
> the servers were writing to all the LUNs.
> In other words, a LUN was accessed in parallel by 3 servers.
>
> Is that a problem, or does GPFS manage the concurrency properly and 
> avoid data corruption?
> I'm asking because I was not expecting a server to write to an NSD it 
> doesn't own, even if it is locally available.
> I thought that the general availability was for failover, not for 
> parallel access.
>
>
> Regards,
> Salvatore
>
>
>
> On 05/11/14 10:22, Vic Cornell wrote:
>> Hi Salvatore,
>>
>> If you are doing the IO on the NSD server itself and it can see all 
>> of the NSDs, it will use its "local" access to write to the LUNs.
>>
>> You need some GPFS clients to see the workload spread across all of 
>> the NSD servers.
>>
>> Vic
>>
>>
>>
>>> On 5 Nov 2014, at 10:15, Salvatore Di Nardo <sdinardo at ebi.ac.uk> 
>>> wrote:
>>>
>>> Hello again,
>>> To understand GPFS better, I recently built a test GPFS cluster 
>>> using some old hardware that was going to be retired. The storage 
>>> was SAN devices, so instead of using native RAID I went for the old 
>>> school GPFS. The configuration is basically:
>>>
>>> 3x servers
>>> 3x san storages
>>> 2x san switches
>>>
>>> I did no zoning, so all the servers can see all the LUNs, but on 
>>> NSD creation I gave each LUN a primary, secondary and tertiary 
>>> server, with the following rule:
>>>
>>> STORAGE     primary     secondary   tertiary
>>> storage1    server1     server2     server3
>>> storage2    server2     server3     server1
>>> storage3    server3     server1     server2
>>>
>>>
>>>
>>> Looking at mmcrnsd, it was my understanding that the primary 
>>> server is the one that writes to the NSD unless it fails, in which 
>>> case the next server in the list takes ownership of the LUN.
>>>
>>> Now comes the question:
>>> When I ran a dd from server1, surprisingly I discovered that 
>>> server1 was writing to all the LUNs; the other 2 servers were doing 
>>> nothing. This behaviour surprised me because on GSS only the RG 
>>> owner can write, so one server "asks" the other servers to write to 
>>> their own RGs. In fact, on GSS you can see a lot of Ethernet 
>>> traffic and IO/s on each server. While I understand that the 
>>> situation is different, I'm puzzled by the fact that all the 
>>> servers seem able to write to all the LUNs.
>>>
>>> SAN devices usually should be connected to one server only, as 
>>> parallel access could create data corruption. In environments where 
>>> you connect a SAN to multiple servers (for example a VMware cloud), 
>>> it is the software's task to prevent servers from overwriting each 
>>> other's data (and corrupting it).
>>>
>>> Honestly, what I was expecting is: server1 writing to its own 
>>> LUNs, and data traffic (Ethernet) to the other 2 servers, basically 
>>> asking *them* to write to the other LUNs. I don't know if this 
>>> behaviour is normal or not. I tried to find documentation about it, 
>>> but could not find any.
>>>
>>> Could somebody tell me if this _/"every server writes to all the 
>>> LUNs"/_ behaviour is intended or not?
>>>
>>> Thanks in advance,
>>> Salvatore
>>> _______________________________________________
>>> gpfsug-discuss mailing list
>>> gpfsug-discuss at gpfsug.org
>>> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>>
>>
>>
>> _______________________________________________
>> gpfsug-discuss mailing list
>> gpfsug-discuss at gpfsug.org
>> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>
