<html>
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
  </head>
  <body>
    <p>I'd love to see your fstab entry for that bind mount. <br>
      Do you use systemd?<br>
      What cluster manager are you using?<br>
    </p>
    <div class="moz-cite-prefix">On 2/22/22 15:21, Maloney, J.D. wrote:<br>
    </div>
    <blockquote type="cite"
cite="mid:CH0PR11MB529963C66BD494F5607E74CCBC3B9@CH0PR11MB5299.namprd11.prod.outlook.com">
      <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
      <meta name="Generator" content="Microsoft Word 15 (filtered
        medium)">
      <style>@font-face
        {font-family:"Cambria Math";
        panose-1:2 4 5 3 5 4 6 3 2 4;}@font-face
        {font-family:Calibri;
        panose-1:2 15 5 2 2 2 4 3 2 4;}p.MsoNormal, li.MsoNormal, div.MsoNormal
        {margin:0in;
        font-size:11.0pt;
        font-family:"Calibri",sans-serif;}a:link, span.MsoHyperlink
        {mso-style-priority:99;
        color:blue;
        text-decoration:underline;}span.EmailStyle19
        {mso-style-type:personal-reply;
        font-family:"Calibri",sans-serif;
        color:windowtext;}.MsoChpDefault
        {mso-style-type:export-only;
        font-size:10.0pt;}div.WordSection1
        {page:WordSection1;}</style>
      <div class="WordSection1">
        <p class="MsoNormal">Our Puppet/Ansible GPFS modules/playbooks
          handle this sequencing for us (we use bind mounts for things
          like u, projects, and scratch as well).  Like Skylar mentioned,
          page pool allocation, quorum checking, and cluster arbitration
          have to happen before the FS can mount, so the time you
          mentioned doesn't seem off to me.  We just make creation of
          the bind mounts dependent on the actual GPFS mount in the
          configuration-management tooling, which has worked out well
          for us in that regard.  <o:p></o:p></p>
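        <p class="MsoNormal">A rough, hypothetical sketch of that dependency (not their actual playbook; ansible.posix.mount is the stock Ansible mount module, and all paths are illustrative):</p>

```yaml
# Hypothetical Ansible tasks: only create the bind mount once the real
# GPFS filesystem is actually mounted at /gpfs1.
- name: Check whether /gpfs1 is mounted
  ansible.builtin.command: mountpoint -q /gpfs1
  register: gpfs1_mounted
  changed_when: false
  failed_when: false

- name: Bind-mount /home out of GPFS
  ansible.posix.mount:
    src: /gpfs1/home
    path: /home
    fstype: none
    opts: bind
    state: mounted
  when: gpfs1_mounted.rc == 0
```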
        <p class="MsoNormal"><o:p> </o:p></p>
        <p class="MsoNormal">Best,<o:p></o:p></p>
        <p class="MsoNormal"><o:p> </o:p></p>
        <div>
          <p class="MsoNormal"><span
              style="font-size:10.5pt;color:black">J.D. Maloney<o:p></o:p></span></p>
          <div>
            <p class="MsoNormal"><span
                style="font-size:10.5pt;color:black">Sr. HPC Storage
                Engineer | Storage Enabling Technologies Group<o:p></o:p></span></p>
          </div>
        </div>
        <p class="MsoNormal"><span style="font-size:10.5pt;color:black">National
            Center for Supercomputing Applications (NCSA)</span><o:p></o:p></p>
        <p class="MsoNormal"><o:p> </o:p></p>
        <div style="border:none;border-top:solid #B5C4DF
          1.0pt;padding:3.0pt 0in 0in 0in">
          <p class="MsoNormal" style="margin-bottom:12.0pt"><b><span
                style="font-size:12.0pt;color:black">From:
              </span></b><span style="font-size:12.0pt;color:black"><a class="moz-txt-link-abbreviated" href="mailto:gpfsug-discuss-bounces@spectrumscale.org">gpfsug-discuss-bounces@spectrumscale.org</a>
              <a class="moz-txt-link-rfc2396E" href="mailto:gpfsug-discuss-bounces@spectrumscale.org"><gpfsug-discuss-bounces@spectrumscale.org></a> on behalf
              of Skylar Thompson <a class="moz-txt-link-rfc2396E" href="mailto:skylar2@uw.edu"><skylar2@uw.edu></a><br>
              <b>Date: </b>Tuesday, February 22, 2022 at 2:13 PM<br>
              <b>To: </b><a class="moz-txt-link-abbreviated" href="mailto:gpfsug-discuss@spectrumscale.org">gpfsug-discuss@spectrumscale.org</a>
              <a class="moz-txt-link-rfc2396E" href="mailto:gpfsug-discuss@spectrumscale.org"><gpfsug-discuss@spectrumscale.org></a><br>
              <b>Subject: </b>Re: [gpfsug-discuss] How to do multiple
              mounts via GPFS<o:p></o:p></span></p>
        </div>
        <div>
          <p class="MsoNormal">The problem might be that the service
            indicates success when mmstartup returns, rather than when
            the mount is actually active (which requires quorum
            checking, arbitration, etc.). A couple of tricks I can think
            of would be using ConditionPathIsMountPoint from
            systemd.unit [1], or adding a callback [2] that triggers on
            the mount event for your filesystem and creates the bind
            mount, rather than relying on systemd.<br>
            <br>
            [1] <a href="https://www.freedesktop.org/software/systemd/man/systemd.unit.html#ConditionPathIsMountPoint=">https://www.freedesktop.org/software/systemd/man/systemd.unit.html#ConditionPathIsMountPoint=</a>
            <br>
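            For the first trick, a minimal sketch (assuming the bind
            mount is generated from fstab as home.mount and GPFS mounts
            at /gpfs1; the drop-in path is illustrative):

```ini
# Hypothetical drop-in: /etc/systemd/system/home.mount.d/gpfs.conf
# Skip the bind mount if /gpfs1 is not yet an active mount point.
[Unit]
ConditionPathIsMountPoint=/gpfs1
```

            Note that systemd evaluates conditions only when the unit is
            started, so something still needs to retry the mount once
            GPFS is up.<br>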
            [2] <a href="https://www.ibm.com/docs/en/spectrum-scale/5.1.2?topic=reference-mmaddcallback-command">https://www.ibm.com/docs/en/spectrum-scale/5.1.2?topic=reference-mmaddcallback-command</a>
            <br>
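            The callback route might look roughly like this; the
            callback name and script path are made up, and the exact
            event names and %-parameters should be checked against the
            mmaddcallback documentation [2]:

```shell
# Hypothetical: run a local script whenever a filesystem mounts on this node.
# /usr/local/sbin/bind-home.sh would create the bind mount when the
# mounted filesystem (%fsName) is gpfs1.
mmaddcallback homeBindMount \
    --command /usr/local/sbin/bind-home.sh \
    --event mount \
    --parms "%fsName"
```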
            <br>
            These are both on our todo list for improving our own GPFS
            mounting, as we have problems with our job scheduler not
            starting reliably on reboot; since Puppet can start it on
            the next run, it just means nodes might not return to
            service for 30 minutes or so.<br>
            <br>
            On Tue, Feb 22, 2022 at 03:05:58PM -0500, Justin Cantrell
            wrote:<br>
            > This is how we're currently solving this problem, with
            systemd timer and<br>
            > mount. None of the requires seem to work with gpfs
            since it starts so late.<br>
            > I would like a better solution.<br>
            > <br>
            > Is it normal for gpfs to start so late? I think it doesn't mount until<br>
            > after the gpfs.service starts, and even then it takes 20-30 seconds.<br>
            > <br>
            > <br>
            > On 2/22/22 14:42, Skylar Thompson wrote:<br>
            > > Like Tina, we're doing bind mounts in autofs. I
            forgot that there might be<br>
            > > a race condition if you're doing it in fstab. If
            you're on system with systemd,<br>
            > > another option might be to do this directly with
            systemd.mount rather than<br>
            > > let the fstab generator make the systemd.mount
            units:<br>
            > > <br>
            > > <a href="https://www.freedesktop.org/software/systemd/man/systemd.mount.html">https://www.freedesktop.org/software/systemd/man/systemd.mount.html</a>
            <br>
            > > <br>
            > > You could then set RequiresMountsFor=/gpfs1 in the bind mount unit.<br>
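            As a sketch: the directive is spelled RequiresMountsFor= and
            takes a path rather than a unit name, so a hand-written
            bind-mount unit (paths illustrative) might look like:

```ini
# Hypothetical /etc/systemd/system/home.mount
[Unit]
RequiresMountsFor=/gpfs1

[Mount]
What=/gpfs1/home
Where=/home
Type=none
Options=bind

[Install]
WantedBy=multi-user.target
```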
            > > <br>
            > > On Tue, Feb 22, 2022 at 02:23:53PM -0500, Justin
            Cantrell wrote:<br>
            > > > I tried a bind mount, but perhaps I'm doing
            it wrong. The system fails<br>
            > > > to boot because gpfs doesn't start until too
            late in the boot process.<br>
            > > > In fact, the system boots and the gpfs1
            partition isn't available for a<br>
            > > > good 20-30 seconds.<br>
            > > > <br>
            > > > /gpfs1/home /home none bind<br>
            > > > I've tried adding mount options of
            x-systemd-requires=gpfs1, noauto.<br>
            > > > The noauto lets it boot, but the mount is
            never mounted properly. Doing<br>
            > > > a manual mount -a mounts it.<br>
            > > > <br>
            > > > On 2/22/22 12:37, Skylar Thompson wrote:<br>
            > > > > Assuming this is on Linux, you ought to
            be able to use bind mounts for<br>
            > > > > that, something like this in fstab or
            equivalent:<br>
            > > > > <br>
            > > > > /gpfs1/home /home none bind 0 0<br>
            > > > > <br>
            > > > > On Tue, Feb 22, 2022 at 12:24:09PM
            -0500, Justin Cantrell wrote:<br>
            > > > > > We're trying to mount multiple
            mounts at boot up via gpfs.<br>
            > > > > > We can mount the main gpfs mount
            /gpfs1, but would like to mount things<br>
            > > > > > like:<br>
            > > > > > /home /gpfs1/home<br>
            > > > > > /other /gpfs1/other<br>
            > > > > > /stuff /gpfs1/stuff<br>
            > > > > > <br>
            > > > > > But adding that to fstab doesn't
            work, because from what I understand,<br>
            > > > > > that's not how gpfs works with
            mounts.<br>
            > > > > > What's the standard way to
            accomplish something like this?<br>
            > > > > > We've used systemd timers/mounts to
            accomplish it, but that's not ideal.<br>
            > > > > > Is there a way to do this natively
            with gpfs or does this have to be done<br>
            > > > > > through symlinks or gpfs over nfs?<br>
            > > >
            _______________________________________________<br>
            > > > gpfsug-discuss mailing list<br>
            > > > gpfsug-discuss at spectrumscale.org<br>
            > > > <a href="http://gpfsug.org/mailman/listinfo/gpfsug-discuss">http://gpfsug.org/mailman/listinfo/gpfsug-discuss</a>
            <br>
            <br>
            -- <br>
            -- Skylar Thompson (<a class="moz-txt-link-abbreviated" href="mailto:skylar2@u.washington.edu">skylar2@u.washington.edu</a>)<br>
            -- Genome Sciences Department (UW Medicine), System
            Administrator<br>
            -- Foege Building S046, (206)-685-7354<br>
            -- Pronouns: He/Him/His<br>
            <o:p></o:p></p>
        </div>
      </div>
    </blockquote>
  </body>
</html>