<html>
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
  </head>
  <body text="#000000" bgcolor="#FFFFFF">
    Hi<br>
    <br>
      When mmbackup has passed the preflight stage (which happens pretty
    quickly), you'll find the autogenerated ruleset at
    /var/mmfs/mmbackup/.mmbackupRules*<br>
    <br>
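    For example (an untested sketch of Marc's -P suggestion quoted below;
    the copied file name is a placeholder, while the paths and options are
    the ones used earlier in this thread), you could copy that ruleset, add
    a FOR FILESET(...) clause to its LIST rules, and re-run mmbackup
    against the edited copy:<br>
    <br>
    &nbsp;&nbsp;cp /var/mmfs/mmbackup/.mmbackupRules* /tmp/mmbackupRules.custom<br>
    &nbsp;&nbsp;# edit the copy, e.g. add FOR FILESET('sysadmin3') to the LIST rules<br>
    &nbsp;&nbsp;mmbackup /gpfs/sgfs1 -N tsm-helper1-ib0 -s /dev/shm --tsm-errorlog $logfile -P /tmp/mmbackupRules.custom -L 2<br>
    <br>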
    Best,<br>
    <br>
    Jez<br>
    <br>
    <br>
    <div class="moz-cite-prefix">On 18/05/17 20:02, Jaime Pinto wrote:<br>
    </div>
    <blockquote type="cite"
      cite="mid:20170518150246.890675i0fcqdnumu@support.scinet.utoronto.ca">Ok
      Marc
      <br>
      <br>
      I'll follow your option 2) suggestion, and capture what mmbackup
      is using as a rule first, then modify it.
      <br>
      <br>
      I imagine by 'capture' you are referring to the -L n level I use?
      <br>
      <br>
      -L n
      <br>
               Controls the level of information displayed by the
      <br>
               mmbackup command. Larger values indicate the
      <br>
               display of more detailed information. n should be one of
      <br>
               the following values:
      <br>
      <br>
               3
      <br>
                        Displays the same information as 2, plus each
      <br>
                        candidate file and the applicable rule.
      <br>
      <br>
               4
      <br>
                        Displays the same information as 3, plus each
      <br>
                        explicitly EXCLUDEed or LISTed
      <br>
                        file, and the applicable rule.
      <br>
      <br>
               5
      <br>
                        Displays the same information as 4, plus the
      <br>
                        attributes of candidate and EXCLUDEed or
      <br>
                        LISTed files.
      <br>
      <br>
               6
      <br>
                        Displays the same information as 5, plus
      <br>
                        non-candidate files and their attributes.
      <br>
      <br>
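               For instance (untested, reusing the same helper node and paths
      from my runs below), a level-3 run against the whole filesystem should
      list each candidate file together with the rule that selected it:
      <br>
      <br>
               mmbackup /gpfs/sgfs1 -N tsm-helper1-ib0 -s /dev/shm --tsm-errorlog $logfile -L 3
      <br>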
      Thanks
      <br>
      Jaime
      <br>
      <br>
      <br>
      <br>
      <br>
      Quoting "Marc A Kaplan" <a class="moz-txt-link-rfc2396E" href="mailto:makaplan@us.ibm.com"><makaplan@us.ibm.com></a>:
      <br>
      <br>
      <blockquote type="cite">1. As I surmised, and I now have
        verification from Mr. mmbackup, mmbackup
        <br>
        wants to support incremental backups (using what it calls its
        shadow
        <br>
        database) and keep both your sanity and its sanity -- so
        mmbackup limits
        <br>
        you to either full filesystem or full inode-space (independent
        fileset.)
        <br>
        If you want to do something else, okay, but you have to be
        careful and be
        <br>
        sure of yourself. IBM will not be able to jump in and help you
        if and when
        <br>
        it comes time to restore and you discover that your backup(s)
        were not
        <br>
        complete.
        <br>
        <br>
        2. If you decide you're a big boy (or woman or XXX) and want to
        do some
        <br>
        hacking ...  Fine... But even then, I suggest you do the
        smallest hack
        <br>
        that will mostly achieve your goal...
        <br>
        DO NOT think you can create a custom policy rules list for
        mmbackup out of
        <br>
        thin air....  Capture the rules mmbackup creates and make small
        changes to
        <br>
        that --
        <br>
        And as with any disaster recovery plan.....   Plan your Test and
        Test your
        <br>
        Plan....  Then do some dry run recoveries before you really
        "need" to do a
        <br>
        real recovery.
        <br>
        <br>
        I only even suggest this because Jaime says he has a huge
        filesystem with
        <br>
        several dependent filesets and he really, really wants to do a
        partial
        <br>
        backup, without first copying or re-organizing the filesets.
        <br>
        <br>
        HMMM.... otoh... if you have one or more dependent filesets that
        are
        <br>
        smallish, and/or you don't need the backups -- create
        independent
        <br>
        filesets, copy/move/delete the data, rename, voila.
        <br>
        <br>
        <br>
        <br>
        From:   "Jaime Pinto" <a class="moz-txt-link-rfc2396E" href="mailto:pinto@scinet.utoronto.ca"><pinto@scinet.utoronto.ca></a>
        <br>
        To:     "Marc A Kaplan" <a class="moz-txt-link-rfc2396E" href="mailto:makaplan@us.ibm.com"><makaplan@us.ibm.com></a>
        <br>
        Cc:     "gpfsug main discussion list"
        <a class="moz-txt-link-rfc2396E" href="mailto:gpfsug-discuss@spectrumscale.org"><gpfsug-discuss@spectrumscale.org></a>
        <br>
        Date:   05/18/2017 12:36 PM
        <br>
        Subject:        Re: [gpfsug-discuss] What is an independent
        fileset? was:
        <br>
        mmbackup        with fileset : scope errors
        <br>
        <br>
        <br>
        <br>
        Marc
        <br>
        <br>
        The -P option may be a very good workaround, but I still have to
        test it.
        <br>
        <br>
        I'm currently trying to craft the mm rule, as minimal as possible;
        <br>
        however, I'm not sure what attributes mmbackup expects to see.
        <br>
        <br>
        Below is my first attempt. It would be nice to get comments from
        <br>
        somebody familiar with the inner workings of mmbackup.
        <br>
        <br>
        Thanks
        <br>
        Jaime
        <br>
        <br>
        <br>
        /* A macro to abbreviate VARCHAR */
        <br>
        define([vc],[VARCHAR($1)])
        <br>
        <br>
        /* Define an external list */
        <br>
        RULE EXTERNAL LIST 'allfiles' EXEC
        <br>
        '/scratch/r/root/mmpolicyRules/mmpolicyExec-list'
        <br>
        <br>
        /* Generate a list of all files, directories, plus all other
        file
        <br>
        system objects,
        <br>
            like symlinks, named pipes, etc. Include the owner's id with
        each
        <br>
        object and
        <br>
            sort them by the owner's id */
        <br>
        <br>
        RULE 'r1' LIST 'allfiles'
        <br>
                 DIRECTORIES_PLUS
        <br>
                 SHOW('-u ' || vc(USER_ID) || ' -a ' || vc(ACCESS_TIME) || ' -m ' ||
        <br>
        vc(MODIFICATION_TIME) || ' -s ' || vc(FILE_SIZE))
        <br>
                 FROM POOL 'system'
        <br>
                 FOR FILESET('sysadmin3')
        <br>
        <br>
        /* Files in special filesets, such as those excluded, are never
        traversed
        <br>
        */
        <br>
        RULE 'ExcSpecialFile' EXCLUDE
        <br>
                 FOR FILESET('scratch3','project3')
        <br>
        <br>
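        (A possible sanity check before handing a draft like this to
        mmbackup -- an untested sketch, with a placeholder rules file name
        -- is a dry run through mmapplypolicy itself:
        <br>
        <br>
        mmapplypolicy /gpfs/sgfs1/sysadmin3 -P /tmp/draft-rules.pol -I test --scope fileset -L 3
        <br>
        <br>
        which reports the selected candidates and the rule applied to each
        without executing anything.)
        <br>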
        <br>
        <br>
        <br>
        <br>
        Quoting "Marc A Kaplan" <a class="moz-txt-link-rfc2396E" href="mailto:makaplan@us.ibm.com"><makaplan@us.ibm.com></a>:
        <br>
        <br>
        <blockquote type="cite">Jaime,
          <br>
          <br>
            While we're waiting for the mmbackup expert to weigh in,
          notice that
          <br>
        the mmbackup command does have a -P option
          that allows you to provide a
          <br>
          customized policy rules file.
          <br>
          <br>
          So... a fairly safe hack is to do a trial mmbackup run,
          capture the
          <br>
          automatically generated policy file, and then augment it with
          FOR
          <br>
          FILESET('fileset-I-want-to-backup') clauses.... Then run the
          mmbackup
          <br>
        for real with your customized policy file.
          <br>
          <br>
          mmbackup uses mmapplypolicy which by itself is happy to limit
          its
          <br>
          directory scan to a particular fileset by using
          <br>
          <br>
          mmapplypolicy /path-to-any-directory-within-a-gpfs-filesystem
          --scope
          <br>
          fileset ....
          <br>
          <br>
          However, mmbackup probably has other worries and for
          simplicity and
          <br>
          helping make sure you get complete, sensible backups,
          apparently has
          <br>
          imposed some restrictions to preserve sanity (yours and our
          support
          <br>
        team! ;-) )  ...   (For example, suppose you
          were doing incremental backups,
          <br>
          starting at different paths each time? -- happy to do so, but
          when
          <br>
          disaster strikes and you want to restore -- you'll end up
          confused
          <br>
        and/or unhappy!)
          <br>
          <br>
          "converting from one fileset to another" --- sorry there is no
          such
          <br>
        thing.  Filesets are kinda like little
          filesystems within filesystems.  Moving
          <br>
        a file from one fileset to another
          requires a copy operation.   There is
          <br>
        no fast move nor hardlinking.
          <br>
          <br>
          --marc
          <br>
          <br>
          <br>
          <br>
          From:   "Jaime Pinto" <a class="moz-txt-link-rfc2396E" href="mailto:pinto@scinet.utoronto.ca"><pinto@scinet.utoronto.ca></a>
          <br>
          To:     "gpfsug main discussion list"
          <br>
        <a class="moz-txt-link-rfc2396E" href="mailto:gpfsug-discuss@spectrumscale.org"><gpfsug-discuss@spectrumscale.org></a>,
        <br>
        "Marc A Kaplan"
          <a class="moz-txt-link-rfc2396E" href="mailto:makaplan@us.ibm.com"><makaplan@us.ibm.com></a>
          <br>
          Date:   05/18/2017 09:58 AM
          <br>
          Subject:        Re: [gpfsug-discuss] What is an independent
          fileset?
          <br>
        was: mmbackup with fileset : scope
          errors
          <br>
          <br>
          <br>
          <br>
          Thanks for the explanation, Marc and Luis,
          <br>
          <br>
          It begs the question: why are filesets created as dependent by
          <br>
          default, if the adverse repercussions can be so great afterward? Even
          <br>
          in my case, where I manage GPFS and TSM deployments (and have been
          <br>
          around for a while), I didn't realize at all that not adding an extra
          <br>
          option at fileset creation time would cause me huge trouble with
          <br>
          scaling later on as I try to use mmbackup.
          <br>
          <br>
          When you have different groups managing file systems and backups that
          <br>
          don't read each other's manuals ahead of time, then you have a really
          <br>
          bad recipe.
          <br>
          <br>
          I'm looking forward to your explanation as to why mmbackup
          cares one
          <br>
          way or another.
          <br>
          <br>
          I'm also hoping for a hint as to how to configure backup
          exclusion
          <br>
          rules on the TSM side to exclude fileset traversing on the
          GPFS side.
          <br>
          Is mmbackup smart enough (actually smarter than the TSM client itself) to
          <br>
          read the exclusion rules in the TSM configuration and apply them
          <br>
          before traversing?
          <br>
          <br>
          Thanks
          <br>
          Jaime
          <br>
          <br>
          Quoting "Marc A Kaplan" <a class="moz-txt-link-rfc2396E" href="mailto:makaplan@us.ibm.com"><makaplan@us.ibm.com></a>:
          <br>
          <br>
          <blockquote type="cite">When I see "independent fileset" (in
            Spectrum/GPFS/Scale)  I always
            <br>
          think and try to read that as "inode space".
            <br>
            <br>
            An "independent fileset" has all the attributes of an
            (older-fashioned)
            <br>
            dependent fileset PLUS all of its files are represented by
            inodes that
            <br>
          are in a separable range of inode numbers
            - this allows GPFS to efficiently
            <br>
          do snapshots of just that inode-space
            (uh... independent fileset)...
            <br>
            <br>
            And... of course the files of dependent filesets must also
            be
            <br>
          represented by inodes -- those inode numbers are
            within the inode-space of whatever
            <br>
            the containing independent fileset is... as was chosen when
            you created
            <br>
            the fileset....   If you didn't say otherwise, inodes come
            from the
            <br>
            default "root" fileset....
            <br>
            <br>
            Clear as your bath-water, no?
            <br>
            <br>
            So why does mmbackup care one way or another ???   Stay
            tuned....
            <br>
            <br>
            BTW - if you look at the bits of the inode numbers carefully
            --- you
            <br>
            may not immediately discern what I mean by
            a "separable range of inode
            <br>
            numbers" -- (very technical hint) you may need to permute
            the bit order
            <br>
            before you discern a simple pattern...
            <br>
            <br>
            <br>
            <br>
            From:   "Luis Bolinches" <a class="moz-txt-link-rfc2396E" href="mailto:luis.bolinches@fi.ibm.com"><luis.bolinches@fi.ibm.com></a>
            <br>
            To:     <a class="moz-txt-link-abbreviated" href="mailto:gpfsug-discuss@spectrumscale.org">gpfsug-discuss@spectrumscale.org</a>
            <br>
            Cc:     <a class="moz-txt-link-abbreviated" href="mailto:gpfsug-discuss@spectrumscale.org">gpfsug-discuss@spectrumscale.org</a>
            <br>
            Date:   05/18/2017 02:10 AM
            <br>
            Subject:        Re: [gpfsug-discuss] mmbackup with fileset :
            scope
            <br>
            errors
            <br>
            Sent by:
            <a class="moz-txt-link-abbreviated" href="mailto:gpfsug-discuss-bounces@spectrumscale.org">gpfsug-discuss-bounces@spectrumscale.org</a>
            <br>
            <br>
            <br>
            <br>
            Hi
            <br>
            <br>
            There is no direct way to convert a fileset from dependent to
            <br>
            independent or vice versa.
            <br>
            <br>
            I would suggest taking a look at chapter 5 of the 2014 redbook; lots
            <br>
            of definitions about GPFS ILM, including filesets:
            <br>
            <a class="moz-txt-link-freetext" href="http://www.redbooks.ibm.com/abstracts/sg248254.html?Open">http://www.redbooks.ibm.com/abstracts/sg248254.html?Open</a> It is
            <br>
            not the only place where this is explained, but I honestly believe it is
            <br>
            a good single starting point. It also needs an update, as it does not
            <br>
            have anything on CES or ESS, so anyone on this list should feel free to
            <br>
            give feedback on that page; people with funding decisions listen there.
            <br>
            <br>
            So you are limited to either migrating the data from that fileset to a
            <br>
            new independent fileset (multiple ways to do that) or using the TSM client
            <br>
            config.
            <br>
            <br>
            ----- Original message -----
            <br>
            From: "Jaime Pinto" <a class="moz-txt-link-rfc2396E" href="mailto:pinto@scinet.utoronto.ca"><pinto@scinet.utoronto.ca></a>
            <br>
            Sent by: <a class="moz-txt-link-abbreviated" href="mailto:gpfsug-discuss-bounces@spectrumscale.org">gpfsug-discuss-bounces@spectrumscale.org</a>
            <br>
            To: "gpfsug main discussion list"
            <a class="moz-txt-link-rfc2396E" href="mailto:gpfsug-discuss@spectrumscale.org"><gpfsug-discuss@spectrumscale.org></a>,
            <br>
            "Jaime Pinto" <a class="moz-txt-link-rfc2396E" href="mailto:pinto@scinet.utoronto.ca"><pinto@scinet.utoronto.ca></a>
            <br>
            Cc:
            <br>
            Subject: Re: [gpfsug-discuss] mmbackup with fileset : scope
            errors
            <br>
            Date: Thu, May 18, 2017 4:43 AM
            <br>
            <br>
            There is hope. See reference link below:
            <br>
            <br>
            <a class="moz-txt-link-freetext" href="https://www.ibm.com/support/knowledgecenter/en/STXKQY_4.1.1/com.ibm.spectrum.scale.v4r11.ins.doc/bl1ins_tsm_fsvsfset.htm">https://www.ibm.com/support/knowledgecenter/en/STXKQY_4.1.1/com.ibm.spectrum.scale.v4r11.ins.doc/bl1ins_tsm_fsvsfset.htm</a>
            <br>
            <br>
            The issue has to do with dependent vs. independent filesets,
            something
            <br>
            I didn't even realize existed until now. Our filesets are
            dependent
            <br>
            (for no particular reason), so I have to find a way to turn them into
            <br>
            independent ones.
            <br>
            <br>
            The proper option syntax is "--scope inodespace", and the error
            <br>
            message actually flagged that; however, I didn't know how to
            <br>
            interpret what I saw:
            <br>
            <br>
            <br>
            # mmbackup /gpfs/sgfs1/sysadmin3 -N tsm-helper1-ib0 -s
            /dev/shm
            <br>
            --scope inodespace --tsm-errorlog $logfile -L 2
            <br>
            --------------------------------------------------------
            <br>
            mmbackup: Backup of /gpfs/sgfs1/sysadmin3 begins at Wed May
            17
            <br>
            21:27:43 EDT 2017.
            <br>
            --------------------------------------------------------
            <br>
            Wed May 17 21:27:45 2017 mmbackup:mmbackup: Backing up
            *dependent*
            <br>
            fileset sysadmin3 is not supported
            <br>
            Wed May 17 21:27:45 2017 mmbackup:This fileset is not
            suitable for
            <br>
            fileset level backup.  exit 1
            <br>
            --------------------------------------------------------
            <br>
            <br>
            Will post the outcome.
            <br>
            Jaime
            <br>
            <br>
            <br>
            <br>
            Quoting "Jaime Pinto" <a class="moz-txt-link-rfc2396E" href="mailto:pinto@scinet.utoronto.ca"><pinto@scinet.utoronto.ca></a>:
            <br>
            <br>
            <blockquote type="cite">Quoting "Luis Bolinches"
              <a class="moz-txt-link-rfc2396E" href="mailto:luis.bolinches@fi.ibm.com"><luis.bolinches@fi.ibm.com></a>:
              <br>
              <br>
              <blockquote type="cite">Hi
                <br>
                <br>
                have you tried to add exceptions on the TSM client
                config file?
                <br>
              </blockquote>
              <br>
              Hey Luis,
              <br>
              <br>
              That would work as well (mechanically), however it's not
              elegant or
              <br>
              efficient. When you have over 1PB and 200M files on
              scratch it will
              <br>
              take many hours and several helper nodes to traverse that
              fileset just
              <br>
              to be negated by TSM. In fact, exclusions on TSM are just as
              <br>
              inefficient.
              <br>
              Considering that I want to keep
              project and sysadmin on different
              <br>
              domains then it's much worse, since we have to traverse
              and exclude
              <br>
              scratch & (project|sysadmin) twice, once to capture
              sysadmin and again
              <br>
              to capture project.
              <br>
              <br>
              If I have to use exclusion rules, they have to rely solely on GPFS rules,
              <br>
              and somehow not traverse scratch at all.
              <br>
              <br>
              I suspect there is a way to do this properly; however, the examples in
              <br>
              the GPFS guide and other references are not exhaustive.
              They only show
              <br>
              a couple of trivial cases.
              <br>
              <br>
              However my situation is not unique. I suspect there are
              many facilities
              <br>
              having to deal with backup of HUGE filesets.
              <br>
              <br>
              So the search is on.
              <br>
              <br>
              Thanks
              <br>
              Jaime
              <br>
              <br>
              <br>
              <br>
              <br>
              <blockquote type="cite">
                <br>
                Assuming your GPFS dir is /IBM/GPFS and your fileset to
                exclude is
                <br>
                linked on /IBM/GPFS/FSET1
                <br>
                <br>
                dsm.sys
                <br>
                ...
                <br>
                <br>
                DOMAIN /IBM/GPFS
                <br>
                EXCLUDE.DIR /IBM/GPFS/FSET1
                <br>
                <br>
                <br>
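                (An untested note: with that stanza in place, a plain
                client-side incremental of the domain, e.g. "dsmc incremental",
                should skip /IBM/GPFS/FSET1 entirely -- though, as Jaime points
                out in his reply, the GPFS-side scan is a separate matter.)
                <br>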
                From:   "Jaime Pinto" <a class="moz-txt-link-rfc2396E" href="mailto:pinto@scinet.utoronto.ca"><pinto@scinet.utoronto.ca></a>
                <br>
                To:     "gpfsug main discussion list"
                <br>
            <a class="moz-txt-link-rfc2396E" href="mailto:gpfsug-discuss@spectrumscale.org"><gpfsug-discuss@spectrumscale.org></a>
            <br>
            Date:   17-05-17 23:44
                <br>
                Subject:        [gpfsug-discuss] mmbackup with fileset :
                scope errors
                <br>
                Sent by:        <a class="moz-txt-link-abbreviated" href="mailto:gpfsug-discuss-bounces@spectrumscale.org">gpfsug-discuss-bounces@spectrumscale.org</a>
                <br>
                <br>
                <br>
                <br>
                I have a g200 /gpfs/sgfs1 filesystem with 3 filesets:
                <br>
                * project3
                <br>
                * scratch3
                <br>
                * sysadmin3
                <br>
                <br>
                I have no problems mmbacking up /gpfs/sgfs1 (or sgfs1),
                however we
                <br>
                have no need or space to include *scratch3* on TSM.
                <br>
                <br>
                Question: how to craft the mmbackup command to backup
                <br>
                /gpfs/sgfs1/project3 and/or /gpfs/sgfs1/sysadmin3 only?
                <br>
                <br>
                Below are 3 types of errors:
                <br>
                <br>
                1) mmbackup /gpfs/sgfs1/sysadmin3 -N tsm-helper1-ib0 -s
                /dev/shm
                <br>
                --tsm-errorlog $logfile -L 2
                <br>
                <br>
                ERROR: mmbackup: Options /gpfs/sgfs1/sysadmin3 and
                --scope filesystem
                <br>
                cannot be specified at the same time.
                <br>
                <br>
                2) mmbackup /gpfs/sgfs1/sysadmin3 -N tsm-helper1-ib0 -s
                /dev/shm
                <br>
                --scope inodespace --tsm-errorlog $logfile -L 2
                <br>
                <br>
                ERROR: Wed May 17 16:27:11 2017 mmbackup:mmbackup:
                Backing up
                <br>
                dependent fileset sysadmin3 is not supported
                <br>
                Wed May 17 16:27:11 2017 mmbackup:This fileset is not
                suitable for
                <br>
                fileset level backup.  exit 1
                <br>
                <br>
                3) mmbackup /gpfs/sgfs1/sysadmin3 -N tsm-helper1-ib0 -s
                /dev/shm
                <br>
                --scope filesystem --tsm-errorlog $logfile -L 2
                <br>
                <br>
                ERROR: mmbackup: Options /gpfs/sgfs1/sysadmin3 and
                --scope filesystem
                <br>
                cannot be specified at the same time.
                <br>
                <br>
                These examples don't really cover my case:
                <br>
                <br>
                <a class="moz-txt-link-freetext" href="https://www.ibm.com/support/knowledgecenter/en/STXKQY_4.2.3/com.ibm.spectrum.scale.v4r23.doc/bl1adm_mmbackup.htm#mmbackup__mmbackup_examples">https://www.ibm.com/support/knowledgecenter/en/STXKQY_4.2.3/com.ibm.spectrum.scale.v4r23.doc/bl1adm_mmbackup.htm#mmbackup__mmbackup_examples</a>
                <br>
                <br>
                <br>
                Thanks
                <br>
                Jaime
                <br>
                <br>
                <br>
                <br>
              </blockquote>
              <br>
              <br>
              <br>
              <br>
              <br>
              <br>
              <br>
            </blockquote>
            <br>
            <br>
            <br>
            <br>
            <br>
            <br>
            <br>
            <br>
            <br>
            <br>
            <br>
            <br>
          </blockquote>
          <br>
          <br>
          <br>
          <br>
          <br>
          <br>
          <br>
          <br>
          <br>
          <br>
          <br>
          <br>
          <br>
        </blockquote>
        <br>
        <br>
        <br>
        <br>
        <br>
        <br>
        <br>
        <br>
        <br>
        <br>
        <br>
        <br>
        <br>
      </blockquote>
      <br>
      <br>
      <br>
      <br>
      <br>
      <br>
               ************************************
      <br>
                TELL US ABOUT YOUR SUCCESS STORIES
      <br>
               <a class="moz-txt-link-freetext" href="http://www.scinethpc.ca/testimonials">http://www.scinethpc.ca/testimonials</a>
      <br>
               ************************************
      <br>
      ---
      <br>
      Jaime Pinto
      <br>
      SciNet HPC Consortium - Compute/Calcul Canada
      <br>
      <a class="moz-txt-link-abbreviated" href="http://www.scinet.utoronto.ca">www.scinet.utoronto.ca</a> - <a class="moz-txt-link-abbreviated" href="http://www.computecanada.ca">www.computecanada.ca</a>
      <br>
      University of Toronto
      <br>
      661 University Ave. (MaRS), Suite 1140
      <br>
      Toronto, ON, M5G1M1
      <br>
      P: 416-978-2755
      <br>
      C: 416-505-1477
      <br>
      <br>
      ----------------------------------------------------------------
      <br>
      This message was sent using IMP at SciNet Consortium, University
      of Toronto.
      <br>
      <br>
      _______________________________________________
      <br>
      gpfsug-discuss mailing list
      <br>
      gpfsug-discuss at spectrumscale.org
      <br>
      <a class="moz-txt-link-freetext" href="http://gpfsug.org/mailman/listinfo/gpfsug-discuss">http://gpfsug.org/mailman/listinfo/gpfsug-discuss</a>
      <br>
    </blockquote>
    <br>
    <div class="moz-signature">-- <br>
      <div>
        <font face="arial" color="#000000">
          <b>Jez Tucker</b><br>
          Head of Research and Development, Pixit Media<br>
          07764193820 <font color="#FF0000">|</font> <a
            href="mailto:jtucker@pixitmedia.com">jtucker@pixitmedia.com</a><br>
          <a href="http://www.pixitmedia.com">www.pixitmedia.com</a> <font
            color="#FF0000">|</font> <a
            href="https://twitter.com/PixitMedia">Tw:@pixitmedia.com</a><br>
        </font>
      </div>
    </div>
  </body>
</html>
