<html>
<head>
<meta content="text/html; charset=windows-1252"
http-equiv="Content-Type">
</head>
<body text="#000000" bgcolor="#FFFFFF">
I understand that my test is a bit particular because the client
was also one of the servers.<br>
Usually clients don't have direct access to the storage, but it
still made me wonder how things are supposed to work.<br>
<br>
For example, I did another test with three dd's, one on each
server. All the servers were writing to all the LUNs.<br>
In other words, each LUN was accessed in parallel by three servers.<br>
<br>
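For reference, that parallel test can be sketched roughly as follows. This is only an illustrative sketch: the file paths and sizes are made up, and it runs all three dd's on one machine, whereas on the real cluster each dd ran on a different server against a file in the GPFS mount.<br>

```shell
#!/bin/sh
# Illustrative sketch of the 3-way parallel dd test.
# Paths and sizes are hypothetical; on the real cluster each dd
# ran on a separate NSD server against the GPFS filesystem.
for i in 1 2 3; do
    dd if=/dev/zero of=/tmp/gpfs_dd_test_$i bs=1M count=4 2>/dev/null &
done
wait            # wait for all three parallel writers to finish
ls -l /tmp/gpfs_dd_test_?
```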
Is that a problem, or does GPFS manage the concurrency properly
and avoid data corruption?<br>
I'm asking because I was not expecting a server to write to an NSD
it doesn't own, even if the NSD is locally available.<br>
I thought that the general availability was for failover, not for
parallel access.<br>
<br>
<br>
Regards,<br>
Salvatore<br>
<br>
<br>
<br>
<div class="moz-cite-prefix">On 05/11/14 10:22, Vic Cornell wrote:<br>
</div>
<blockquote
cite="mid:3F74C441-C25D-4F19-AD05-04AD897A08D3@gmail.com"
type="cite">
<meta http-equiv="Content-Type" content="text/html;
charset=windows-1252">
Hi Salvatore,
<div class=""><br class="">
</div>
<div class="">If you are doing the IO on the NSD server itself and
it can see all of the NSDs, it will use its "local" access to
write to the LUNs.</div>
<div class=""><br class="">
</div>
<div class="">You need some GPFS clients to see the workload
spread across all of the NSD servers.</div>
<div class=""><br class="">
</div>
<div class="">Vic</div>
<div class=""><br class="">
<div class=""><br class="">
</div>
<div class=""><br class="">
<div>
<blockquote type="cite" class="">
<div class="">On 5 Nov 2014, at 10:15, Salvatore Di Nardo
<<a moz-do-not-send="true"
href="mailto:sdinardo@ebi.ac.uk" class="">sdinardo@ebi.ac.uk</a>>
wrote:</div>
<br class="Apple-interchange-newline">
<div class="">
<meta http-equiv="content-type" content="text/html;
charset=windows-1252" class="">
<div text="#000000" bgcolor="#FFFFFF" class=""> <font
class="" size="-1">Hello again,<br class="">
To understand GPFS better, I recently built a test
GPFS cluster using some old hardware that was going
to be retired. The storage consisted of SAN devices, so
instead of using native RAID I went for old-school
GPFS. The configuration is basically:<br
class="">
<br class="">
3x servers <br class="">
3x san storages<br class="">
2x san switches<br class="">
<br class="">
I did no zoning, so all the servers can see all the
LUNs, but on NSD creation I gave each LUN a primary,
secondary, and tertiary server, with the following rule:<br
class="">
<br class="">
</font>
<table class="" width="80%" border="1" cellpadding="2"
cellspacing="2">
<tbody class="">
<tr class="">
<td class="" valign="top"><font class=""
size="-1">Storage</font><br class="">
</td>
<td class="" valign="top">Primary<br class="">
</td>
<td class="" valign="top">Secondary<br class="">
</td>
<td class="" valign="top">Tertiary<br class="">
</td>
</tr>
<tr class="">
<td class="" valign="top">storage1<br class="">
</td>
<td class="" valign="top">server1<br class="">
</td>
<td class="" valign="top">server2</td>
<td class="" valign="top">server3</td>
</tr>
<tr class="">
<td class="" valign="top">storage2</td>
<td class="" valign="top">server2</td>
<td class="" valign="top">server3</td>
<td class="" valign="top">server1</td>
</tr>
<tr class="">
<td class="" valign="top">storage3</td>
<td class="" valign="top">server3</td>
<td class="" valign="top">server1</td>
<td class="" valign="top">server2</td>
</tr>
</tbody>
</table>
<font class="" size="-1"><br class="">
<br class="">
Looking at mmcrnsd, it was my understanding that
the primary server is the one that writes to the
NSD unless it fails, in which case the next server
takes ownership of the LUN.<br class="">
<br class="">
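For what it's worth, the assignment in the table above corresponds to an NSD stanza file for mmcrnsd, where the order of the servers list defines the primary/secondary/tertiary preference. A minimal sketch, with hypothetical device paths and NSD names (the real ones aren't shown in this thread):<br class="">

```
# Stanza file for "mmcrnsd -F <file>"; the device paths and NSD
# names below are illustrative, not the actual ones from this cluster.
%nsd:
  device=/dev/mapper/storage1_lun1
  nsd=nsd_storage1_1
  servers=server1,server2,server3
  usage=dataAndMetadata
%nsd:
  device=/dev/mapper/storage2_lun1
  nsd=nsd_storage2_1
  servers=server2,server3,server1
  usage=dataAndMetadata
%nsd:
  device=/dev/mapper/storage3_lun1
  nsd=nsd_storage3_1
  servers=server3,server1,server2
  usage=dataAndMetadata
```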
Now comes the question:<br class="">
When I ran a dd from server1, I was surprised to
discover that server1 was writing to all the LUNs;
the other two servers were doing nothing. This
behaviour surprised me because on GSS only the
recovery group (RG) owner can write, so one server
"asks" another server to write to its own RGs. In
fact, on GSS you can see a lot of Ethernet traffic
and I/O on each server. While I understand that the
situation is different, I'm puzzled that all the
servers seem able to write to all the LUNs. <br class="">
<br class="">
SAN devices usually should be connected to one
server only, as parallel access could cause data
corruption. In environments where a SAN is connected
to multiple servers (for example, a VMware cloud),
it is the software's task to prevent servers from
overwriting each other's data (and corrupting it).<br class="">
<br class="">
Honestly, what I was expecting is: server1 writing
to its own LUNs, and data traffic (Ethernet) to the
other two servers, basically asking <b class="">them</b>
to write to the other LUNs. I don't know whether
this behaviour is normal or not. I tried to find
documentation about it, but could not find any.<br
class="">
<br class="">
Could somebody tell me whether this <u class=""><i
class="">"every server writes to all the LUNs"</i></u>
behaviour is intended or not?<br class="">
<br class="">
Thanks in advance,<br class="">
Salvatore<br class="">
</font> </div>
_______________________________________________<br
class="">
gpfsug-discuss mailing list<br class="">
gpfsug-discuss at <a moz-do-not-send="true"
href="http://gpfsug.org" class="">gpfsug.org</a><br
class="">
<a moz-do-not-send="true"
href="http://gpfsug.org/mailman/listinfo/gpfsug-discuss"
class="">http://gpfsug.org/mailman/listinfo/gpfsug-discuss</a><br
class="">
</div>
</blockquote>
</div>
<br class="">
</div>
</div>
<br>
</blockquote>
<br>
</body>
</html>