On 5 Nov 2014, at 10:33, Salvatore Di Nardo <sdinardo@ebi.ac.uk> wrote:

> I understand that my test is a bit particular, because the client
> was also one of the servers. Usually clients don't have direct
> access to the storage, but it still made me think about how things
> are supposed to work.
>
> For example, I did another test with 3 dd's, one on each server.
> All the servers were writing to all the LUNs; in other words, each
> LUN was accessed in parallel by 3 servers.
>
> Is that a problem, or does GPFS manage the concurrency properly and
> avoid data corruption?

It's not a problem if you use locks. Remember that the clients - even
the ones running on the NSD servers - are talking to the filesystem,
not to the LUNs/NSDs directly.

It is the NSD processes that talk to the NSDs.

So, loosely speaking, it is as if all of the processes you are
running were running on a single system with a local filesystem.

So yes - GPFS is designed to manage the problems created by having a
distributed, shared filesystem, and it does a pretty good job IMHO.
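
For concreteness, a test along those lines might look like the sketch
below. The mount point /gpfs/test and the file names are
illustrative; the point is that every node writes through the
filesystem, and GPFS's distributed locking keeps the concurrent
access to the shared LUNs consistent.

    # One dd per server, all writing into the same GPFS filesystem.
    # /gpfs/test is a hypothetical mount point; adjust to your setup.
    for node in server1 server2 server3; do
        ssh "$node" "dd if=/dev/zero of=/gpfs/test/$node.dat bs=1M count=1024" &
    done
    wait   # all three writes proceed in parallel against the shared LUNs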

> I'm asking because I was not expecting a server to write to an NSD
> it doesn't own, even if the NSD is locally available. I thought
> that the general availability was for failover, not for parallel
> access.

Bear in mind that GPFS supports a number of access models, one of
which is where all of the systems in the cluster have access to all
of the disks.

So parallel access is most commonly used for failover, but that is
not the limit of its capabilities.

Vic
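
If it helps, mmlsnsd is a quick way to see which model a cluster is
using. A sketch, assuming a reasonably recent GPFS release where the
-m and -M options map NSDs to device paths:

    mmlsnsd        # each NSD with its filesystem and NSD server list
    mmlsnsd -m     # map NSDs to /dev devices on the local node
    mmlsnsd -M     # the same mapping, for every node in the cluster

On a cluster with no zoning, -M should show every LUN as a local
device on all three servers, which is exactly why each server can
write to every NSD directly.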
    <br class="">
    <br class="">
    Regards,<br class="">
    Salvatore<br class="">
    <br class="">
    <br class="">
    <br class="">
    <div class="moz-cite-prefix">On 05/11/14 10:22, Vic Cornell wrote:<br class="">
    </div>
    <blockquote cite="mid:3F74C441-C25D-4F19-AD05-04AD897A08D3@gmail.com" type="cite" class="">
      <meta http-equiv="Content-Type" content="text/html;
        charset=windows-1252" class="">
>> Hi Salvatore,
>>
>> If you are doing the IO on the NSD server itself, and it can see
>> all of the NSDs, it will use its "local" access to write to the
>> LUNs.
>>
>> You need some GPFS clients to see the workload spread across all
>> of the NSD servers.
      <div class=""><br class="">
      </div>
      <div class="">Vic</div>
      <div class=""><br class="">
        <div class=""><br class="">
        </div>
        <div class=""><br class="">
          <div class="">
            <blockquote type="cite" class="">
              <div class="">On 5 Nov 2014, at 10:15, Salvatore Di Nardo
                <<a moz-do-not-send="true" href="mailto:sdinardo@ebi.ac.uk" class="">sdinardo@ebi.ac.uk</a>>
                wrote:</div>
              <br class="Apple-interchange-newline">
              <div class="">
                <meta http-equiv="content-type" content="text/html;
                  charset=windows-1252" class="">
                <div text="#000000" bgcolor="#FFFFFF" class=""> <font class="" size="-1">Hello again,<br class="">
                    to understand better GPFS, recently i build up an
                    test gpfs cluster using some old hardware that was
                    going to be retired. THe storage was SAN devices, so
                    instead to use native raids I went for the old
                    school gpfs. the configuration is basically:<br class="">
                    <br class="">
                    3x servers <br class="">
                    3x san storages<br class="">
                    2x san switches<br class="">
                    <br class="">
                    I did no zoning, so all the servers can see all the
                    LUNs, but on nsd creation I gave each LUN a primary,
                    secondary and third server. with the following rule:<br class="">
                    <br class="">
                  </font>
                  <table class="" width="80%" border="1" cellpadding="2" cellspacing="2">
                    <tbody class="">
                      <tr class="">
                        <td class="" valign="top"><font class="" size="-1">STORAGE</font><br class="">
                        </td>
                        <td class="" valign="top">primary<br class="">
                        </td>
                        <td class="" valign="top">secondary<br class="">
                        </td>
                        <td class="" valign="top">tertiary<br class="">
                        </td>
                      </tr>
                      <tr class="">
                        <td class="" valign="top">storage1<br class="">
                        </td>
                        <td class="" valign="top">server1<br class="">
                        </td>
                        <td class="" valign="top">server2</td>
                        <td class="" valign="top">server3</td>
                      </tr>
                      <tr class="">
                        <td class="" valign="top">storage2</td>
                        <td class="" valign="top">server2</td>
                        <td class="" valign="top">server3</td>
                        <td class="" valign="top">server1</td>
                      </tr>
                      <tr class="">
                        <td class="" valign="top">storage3</td>
                        <td class="" valign="top">server3</td>
                        <td class="" valign="top">server1</td>
                        <td class="" valign="top">server2</td>
                      </tr>
                    </tbody>
                  </table>
                  <font class="" size="-1"><br class="">
                    <br class="">
                    looking at the mmcrnsd, it was my understanding that
                    the primary server is the one that wrote on the NSD
                    unless it fails, then the following server take the
                    ownership of the lun.<br class="">
                    <br class="">

>>> Now comes the question: when I ran a dd from server1, I was
>>> surprised to discover that server1 was writing to all the LUNs
>>> while the other two servers were doing nothing. This behaviour
>>> surprised me because on GSS only the recovery group (RG) owner
>>> can write, so one server "asks" the other servers to write to
>>> their own RGs; in fact, on GSS you can see a lot of Ethernet
>>> traffic and IO/s on every server. While I understand that the
>>> situation is different, I'm puzzled by the fact that all the
>>> servers seem able to write to all the LUNs.
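
One way to check where the IO actually lands during such a test is to
watch each server's disk counters while the dd runs. A sketch; iostat
comes from the sysstat package, and mmdiag ships with GPFS:

    iostat -xm 2        # per-LUN throughput as seen by this node
    mmdiag --iohist     # recent GPFS IO history on this node

With no zoning, all of the traffic should show up on the node running
the dd, matching the behaviour described above.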
                    <br class="">
                    SAN deviced usually should be connected to one
                    server only, as paralled access could create data
                    corruption. In environments where you connect a SAN
                    to multiple servers ( example VMWARE cloud) its
                    softeware task to avoid data overwriting between
                    server ( and data corruption ).<br class="">
                    <br class="">
>>> Honestly, what I was expecting is: server1 writing to its own
>>> LUNs, and data traffic (Ethernet) to the other two servers,
>>> basically asking *them* to write to the other LUNs. I don't know
>>> whether this behaviour is normal or not. I tried to find
>>> documentation about it, but could not find any.
>>>
>>> Could somebody tell me whether this "every server writes to all
>>> the LUNs" behaviour is intended or not?
>>>
>>> Thanks in advance,
>>> Salvatore

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at gpfsug.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss