<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
</head>
<body style="word-wrap: break-word; -webkit-nbsp-mode: space; line-break: after-white-space;" class="">
Hi All,
<div class=""><br class="">
</div>
<div class="">Aargh - now I really do feel like an idiot! I had set up the stanza file over a week ago … then had to work on production issues … and completely forgot about setting the block size in the pool stanzas there. But at least we all now know that
stanza files override command-line arguments to mmcrfs.</div>
<div class=""><br class="">
</div>
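<div class="">For anyone who wants to avoid the same trap: a %pool stanza in the file passed to mmcrfs -F can carry its own blockSize, and, as we just learned the hard way, that value wins over -B / --metadata-block-size on the command line. A minimal, purely illustrative sketch (the pool names and sizes below are made-up examples, not my actual stanza file):</div>

```
# Hypothetical stanza file excerpt -- a blockSize here silently
# overrides the -B / --metadata-block-size given on the mmcrfs command line.
%pool:
  pool=system
  usage=metadataOnly
  blockSize=1M

%pool:
  pool=raid6
  usage=dataOnly
  blockSize=4M
```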
<div class="">My apologies…</div>
<div class=""><br class="">
</div>
<div class="">Kevin<br class="">
<div><br class="">
<blockquote type="cite" class="">
<div class="">On Aug 3, 2018, at 1:01 AM, Olaf Weiser <<a href="mailto:olaf.weiser@de.ibm.com" class="">olaf.weiser@de.ibm.com</a>> wrote:</div>
<br class="Apple-interchange-newline">
<div class="">
<div dir="auto" class="">Can you share your stanza file?
<div class=""><br class="">
<div class="">Sent from my iPhone</div>
<div class=""><br class="">
On 02.08.2018 at 23:15, Buterbaugh, Kevin L <<a href="mailto:Kevin.Buterbaugh@Vanderbilt.Edu" class="">Kevin.Buterbaugh@Vanderbilt.Edu</a>> wrote:<br class="">
<br class="">
</div>
<blockquote type="cite" class="">
<div class="">OK, so hold on … NOW what’s going on??? I deleted the filesystem … went to lunch … came back an hour later … recreated the filesystem with a metadata block size of 4 MB … and I STILL have a 1 MB block size in the system pool and the wrong fragment
size in other pools…
<div class=""><br class="">
</div>
<div class="">Kevin<br class="">
<div class=""><br class="">
</div>
<div class="">
<div class="">/root/gpfs</div>
<div class="">root@testnsd1# mmdelfs gpfs5</div>
<div class="">All data on the following disks of gpfs5 will be destroyed:</div>
<div class=""> test21A3nsd</div>
<div class=""> test21A4nsd</div>
<div class=""> test21B3nsd</div>
<div class=""> test21B4nsd</div>
<div class=""> test23Ansd</div>
<div class=""> test23Bnsd</div>
<div class=""> test23Cnsd</div>
<div class=""> test24Ansd</div>
<div class=""> test24Bnsd</div>
<div class=""> test24Cnsd</div>
<div class=""> test25Ansd</div>
<div class=""> test25Bnsd</div>
<div class=""> test25Cnsd</div>
<div class="">Completed deletion of file system /dev/gpfs5.</div>
<div class="">mmdelfs: Propagating the cluster configuration data to all</div>
<div class=""> affected nodes. This is an asynchronous process.</div>
<div class="">/root/gpfs</div>
<div class="">root@testnsd1# mmcrfs gpfs5 -F ~/gpfs/gpfs5.stanza -A yes -B 4M -E yes -i 4096 -j scatter -k all -K whenpossible -m 2 -M 3 -n 32 -Q yes -r 1 -R 3 -T /gpfs5 -v yes --nofilesetdf --metadata-block-size 4M</div>
<div class=""><br class="">
</div>
<div class="">The following disks of gpfs5 will be formatted on node testnsd3:</div>
<div class=""> test21A3nsd: size 953609 MB</div>
<div class=""> test21A4nsd: size 953609 MB</div>
<div class=""> test21B3nsd: size 953609 MB</div>
<div class=""> test21B4nsd: size 953609 MB</div>
<div class=""> test23Ansd: size 15259744 MB</div>
<div class=""> test23Bnsd: size 15259744 MB</div>
<div class=""> test23Cnsd: size 1907468 MB</div>
<div class=""> test24Ansd: size 15259744 MB</div>
<div class=""> test24Bnsd: size 15259744 MB</div>
<div class=""> test24Cnsd: size 1907468 MB</div>
<div class=""> test25Ansd: size 15259744 MB</div>
<div class=""> test25Bnsd: size 15259744 MB</div>
<div class=""> test25Cnsd: size 1907468 MB</div>
<div class="">Formatting file system ...</div>
<div class="">Disks up to size 8.29 TB can be added to storage pool system.</div>
<div class="">Disks up to size 16.60 TB can be added to storage pool raid1.</div>
<div class="">Disks up to size 132.62 TB can be added to storage pool raid6.</div>
<div class="">Creating Inode File</div>
<div class=""> 12 % complete on Thu Aug 2 13:16:26 2018</div>
<div class=""> 25 % complete on Thu Aug 2 13:16:31 2018</div>
<div class=""> 38 % complete on Thu Aug 2 13:16:36 2018</div>
<div class=""> 50 % complete on Thu Aug 2 13:16:41 2018</div>
<div class=""> 62 % complete on Thu Aug 2 13:16:46 2018</div>
<div class=""> 74 % complete on Thu Aug 2 13:16:52 2018</div>
<div class=""> 85 % complete on Thu Aug 2 13:16:57 2018</div>
<div class=""> 96 % complete on Thu Aug 2 13:17:02 2018</div>
<div class=""> 100 % complete on Thu Aug 2 13:17:03 2018</div>
<div class="">Creating Allocation Maps</div>
<div class="">Creating Log Files</div>
<div class=""> 3 % complete on Thu Aug 2 13:17:09 2018</div>
<div class=""> 28 % complete on Thu Aug 2 13:17:15 2018</div>
<div class=""> 53 % complete on Thu Aug 2 13:17:20 2018</div>
<div class=""> 78 % complete on Thu Aug 2 13:17:26 2018</div>
<div class=""> 100 % complete on Thu Aug 2 13:17:27 2018</div>
<div class="">Clearing Inode Allocation Map</div>
<div class="">Clearing Block Allocation Map</div>
<div class="">Formatting Allocation Map for storage pool system</div>
<div class=""> 98 % complete on Thu Aug 2 13:17:34 2018</div>
<div class=""> 100 % complete on Thu Aug 2 13:17:34 2018</div>
<div class="">Formatting Allocation Map for storage pool raid1</div>
<div class=""> 52 % complete on Thu Aug 2 13:17:39 2018</div>
<div class=""> 100 % complete on Thu Aug 2 13:17:43 2018</div>
<div class="">Formatting Allocation Map for storage pool raid6</div>
<div class=""> 24 % complete on Thu Aug 2 13:17:48 2018</div>
<div class=""> 50 % complete on Thu Aug 2 13:17:53 2018</div>
<div class=""> 74 % complete on Thu Aug 2 13:17:58 2018</div>
<div class=""> 99 % complete on Thu Aug 2 13:18:03 2018</div>
<div class=""> 100 % complete on Thu Aug 2 13:18:03 2018</div>
<div class="">Completed creation of file system /dev/gpfs5.</div>
<div class="">mmcrfs: Propagating the cluster configuration data to all</div>
<div class=""> affected nodes. This is an asynchronous process.</div>
<div class="">/root/gpfs</div>
<div class="">root@testnsd1# mmlsfs gpfs5</div>
<div class="">flag value description</div>
<div class="">------------------- ------------------------ -----------------------------------</div>
<div class=""> -f 8192 Minimum fragment (subblock) size in bytes (system pool)</div>
<div class=""> 32768 Minimum fragment (subblock) size in bytes (other pools)</div>
<div class=""> -i 4096 Inode size in bytes</div>
<div class=""> -I 32768 Indirect block size in bytes</div>
<div class=""> -m 2 Default number of metadata replicas</div>
<div class=""> -M 3 Maximum number of metadata replicas</div>
<div class=""> -r 1 Default number of data replicas</div>
<div class=""> -R 3 Maximum number of data replicas</div>
<div class=""> -j scatter Block allocation type</div>
<div class=""> -D nfs4 File locking semantics in effect</div>
<div class=""> -k all ACL semantics in effect</div>
<div class=""> -n 32 Estimated number of nodes that will mount file system</div>
<div class=""> -B 1048576 Block size (system pool)</div>
<div class=""> 4194304 Block size (other pools)</div>
<div class=""> -Q user;group;fileset Quotas accounting enabled</div>
<div class=""> user;group;fileset Quotas enforced</div>
<div class=""> none Default quotas enabled</div>
<div class=""> --perfileset-quota No Per-fileset quota enforcement</div>
<div class=""> --filesetdf No Fileset df enabled?</div>
<div class=""> -V 19.01 (5.0.1.0) File system version</div>
<div class=""> --create-time Thu Aug 2 13:16:47 2018 File system creation time</div>
<div class=""> -z No Is DMAPI enabled?</div>
<div class=""> -L 33554432 Logfile size</div>
<div class=""> -E Yes Exact mtime mount option</div>
<div class=""> -S relatime Suppress atime mount option</div>
<div class=""> -K whenpossible Strict replica allocation option</div>
<div class=""> --fastea Yes Fast external attributes enabled?</div>
<div class=""> --encryption No Encryption enabled?</div>
<div class=""> --inode-limit 101095424 Maximum number of inodes</div>
<div class=""> --log-replicas 0 Number of log replicas</div>
<div class=""> --is4KAligned Yes is4KAligned?</div>
<div class=""> --rapid-repair Yes rapidRepair enabled?</div>
<div class=""> --write-cache-threshold 0 HAWC Threshold (max 65536)</div>
<div class=""> --subblocks-per-full-block 128 Number of subblocks per full block</div>
<div class=""> -P system;raid1;raid6 Disk storage pools in file system</div>
<div class=""> --file-audit-log No File Audit Logging enabled?</div>
<div class=""> --maintenance-mode No Maintenance Mode enabled?</div>
<div class=""> -d test21A3nsd;test21A4nsd;test21B3nsd;test21B4nsd;test23Ansd;test23Bnsd;test23Cnsd;test24Ansd;test24Bnsd;test24Cnsd;test25Ansd;test25Bnsd;test25Cnsd Disks in file system</div>
<div class=""> -A yes Automatic mount option</div>
<div class=""> -o none Additional mount options</div>
<div class=""> -T /gpfs5 Default mount point</div>
<div class=""> --mount-priority 0 Mount priority</div>
<div class="">/root/gpfs</div>
<div class="">root@testnsd1# </div>
</div>
<div class=""><br class="">
</div>
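<div class="">As a sanity check on the numbers above: mmlsfs reports a single --subblocks-per-full-block value (128) for the whole filesystem, and each pool’s fragment size is exactly its block size divided by that count:</div>

```shell
# Check the mmlsfs fragment sizes: fragment = block size / subblocks-per-full-block.
# The 128 comes straight from the "--subblocks-per-full-block 128" line above.
subblocks=128
echo $(( 1024 * 1024 / subblocks ))       # system pool (1M blocks)  -> 8192
echo $(( 4 * 1024 * 1024 / subblocks ))   # other pools (4M blocks)  -> 32768
```

So once the system pool is pinned at a 1 MB block size with 8K fragments, 128 subblocks per block applies filesystem-wide, which would explain why the 4 MB pools show 32K fragments rather than 8K.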
<div class="">
<div class="">
<div class="">—</div>
<div class="">Kevin Buterbaugh - Senior System Administrator</div>
<div class="">Vanderbilt University - Advanced Computing Center for Research and Education</div>
<div class=""><a href="mailto:Kevin.Buterbaugh@vanderbilt.edu" class="">Kevin.Buterbaugh@vanderbilt.edu</a> - (615)875-9633</div>
<div class=""><br class="">
</div>
</div>
<div class=""><br class="">
<blockquote type="cite" class="">
<div class="">On Aug 2, 2018, at 3:31 PM, Buterbaugh, Kevin L <<a href="mailto:Kevin.Buterbaugh@Vanderbilt.Edu" class="">Kevin.Buterbaugh@Vanderbilt.Edu</a>> wrote:</div>
<br class="Apple-interchange-newline">
<div class="">
<div style="word-wrap: break-word; -webkit-nbsp-mode: space; line-break: after-white-space;" class="">
Hi All,
<div class=""><br class="">
</div>
<div class="">Thanks for all the responses on this, although I have the sneaking suspicion that the most significant thing that is going to come out of this thread is the knowledge that Sven has left IBM for DDN. ;-) or :-( or :-O depending on your perspective.</div>
<div class=""><br class="">
</div>
<div class="">Anyway … we have done some testing which has shown that a 4 MB block size is best for those workloads that use “normal” sized files. However, we - like many similar institutions - support a mixed workload, so the 128K fragment size that comes
with that is not optimal for the primarily biomedical type applications that literally create millions of very small files. That’s why we settled on 1 MB as a compromise.</div>
<div class=""><br class="">
</div>
<div class="">So we’re very eager to now test with GPFS 5, a 4 MB block size, and an 8K fragment size. I’m recreating my test cluster filesystem now with that config … so a 4 MB block size on the metadata-only system pool, too.</div>
<div class=""><br class="">
</div>
<div class="">Thanks to all who took the time to respond to this thread. I hope it’s been beneficial to others as well…</div>
<div class=""><br class="">
</div>
<div class="">Kevin</div>
<div class=""><br class="">
<div class="">
<div class="">—</div>
<div class="">Kevin Buterbaugh - Senior System Administrator</div>
<div class="">Vanderbilt University - Advanced Computing Center for Research and Education</div>
<div class=""><a href="mailto:Kevin.Buterbaugh@vanderbilt.edu" class="">Kevin.Buterbaugh@vanderbilt.edu</a> - (615)875-9633</div>
<div class=""><br class="">
</div>
</div>
<div class="">
<blockquote type="cite" class="">
<div class="">On Aug 1, 2018, at 7:11 PM, Andrew Beattie <<a href="mailto:abeattie@au1.ibm.com" class="">abeattie@au1.ibm.com</a>> wrote:</div>
<br class="Apple-interchange-newline">
<div class="">
<div class="socmaildefaultfont" dir="ltr" style="font-family:Arial, Helvetica, sans-serif;font-size:10.5pt">
<div dir="ltr" class="">I too would second the comment about doing testing specific to your environment</div>
<div dir="ltr" class=""> </div>
<div dir="ltr" class="">We recently deployed a number of ESS building blocks into a customer site that was specifically being used for a mixed HPC workload.</div>
<div dir="ltr" class=""> </div>
<div dir="ltr" class="">We spent more than a week playing with different block sizes for both data and metadata, trying to identify which variation would provide the best mix of metadata performance and data performance. One thing we noticed very early
on is that MDtest and IOR both respond very differently as you play with block size and subblock size. What works for one use case may be a very poor option for another use case.</div>
<div dir="ltr" class=""> </div>
<div dir="ltr" class="">Interestingly enough, it turned out that the best overall option for our particular use case was an 8MB block size with 32k sub-blocks, as that gave us good metadata performance and good sequential data performance.</div>
<div dir="ltr" class=""> </div>
<div dir="ltr" class="">That is probably why the 32k sub-block was the default for so many years...</div>
<div dir="ltr" class="">
<div class="socmaildefaultfont" dir="ltr" style="font-family:Arial;font-size:10.5pt">
<div class="socmaildefaultfont" dir="ltr" style="font-family:Arial;font-size:10.5pt">
<div class="socmaildefaultfont" dir="ltr" style="font-family:Arial;font-size:10.5pt">
<div dir="ltr" style="margin-top: 20px;" class="">
<div style="font-size: 12pt; font-weight: bold; font-family: sans-serif; color: #7C7C5F;" class="">
Andrew Beattie</div>
<div style="font-size: 10pt; font-weight: bold; font-family: sans-serif;" class="">
Software Defined Storage - IT Specialist</div>
<div style="font-size: 8pt; font-family: sans-serif; margin-top: 10px;" class="">
<div class=""><span style="font-weight: bold; color: #336699;" class="">Phone: </span>
614-2133-7927</div>
<div class=""><span style="font-weight: bold; color: #336699;" class="">E-mail: </span>
<a href="mailto:abeattie@au1.ibm.com" style="color: #555" class="">abeattie@au1.ibm.com</a></div>
</div>
</div>
</div>
</div>
</div>
</div>
<div dir="ltr" class=""> </div>
<div dir="ltr" class=""> </div>
<blockquote data-history-content-modified="1" dir="ltr" style="border-left:solid #aaaaaa 2px; margin-left:5px; padding-left:5px; direction:ltr; margin-right:0px" class="">
----- Original message -----<br class="">
From: "Marc A Kaplan" <<a href="mailto:makaplan@us.ibm.com" class="">makaplan@us.ibm.com</a>><br class="">
Sent by: <a href="mailto:gpfsug-discuss-bounces@spectrumscale.org" class="">gpfsug-discuss-bounces@spectrumscale.org</a><br class="">
To: gpfsug main discussion list <<a href="mailto:gpfsug-discuss@spectrumscale.org" class="">gpfsug-discuss@spectrumscale.org</a>><br class="">
Cc:<br class="">
Subject: Re: [gpfsug-discuss] Sub-block size not quite as expected on GPFS 5 filesystem?<br class="">
Date: Thu, Aug 2, 2018 10:01 AM<br class="">
<br class="">
<span style=" font-size:10pt;font-family:sans-serif" class="">Firstly, I do suggest that you run some tests and see how much, if any, difference the settings that are available make in performance and/or storage utilization.</span><br class="">
<br class="">
<span style=" font-size:10pt;font-family:sans-serif" class="">Secondly, as I and others have hinted at, deeper in the system, there may be additional parameters and settings. Sometimes they are available via commands, and/or configuration settings, sometimes
not.</span><br class="">
<br class="">
<span style=" font-size:10pt;font-family:sans-serif" class="">Sometimes that's just because we didn't want to overwhelm you or ourselves with yet more "tuning knobs".</span><br class="">
<br class="">
<span style=" font-size:10pt;font-family:sans-serif" class="">Sometimes it's because we made some component more tunable than we really needed, but did not make all the interconnected components equally or as widely tunable.</span><br class="">
<span style=" font-size:10pt;font-family:sans-serif" class="">Sometimes it's because we want to save you from making ridiculous settings that would lead to problems...</span><br class="">
<br class="">
<span style=" font-size:10pt;font-family:sans-serif" class="">OTOH, as I wrote before, if a burning requirement surfaces, things may change from release to release... Just as for so many years subblocks per block seemed forever frozen at the number 32. Now
it varies... and then the discussion shifts to why can't it be even more flexible?</span><br class="">
<br class="">
<div class=""><font face="Default Monospace,Courier New,Courier,monospace" size="2" class="">_______________________________________________<br class="">
gpfsug-discuss mailing list<br class="">
gpfsug-discuss at <a href="http://spectrumscale.org" class="">spectrumscale.org</a><br class="">
<a href="http://gpfsug.org/mailman/listinfo/gpfsug-discuss" class="">http://gpfsug.org/mailman/listinfo/gpfsug-discuss</a></font></div>
</blockquote>
<div dir="ltr" class=""> </div>
</div>
<br class="">
</div>
</blockquote>
</div>
<br class="">
</div>
</div>
</div>
</blockquote>
</div>
<br class="">
</div>
</div>
</div>
</blockquote>
</div>
<br class="">
</div>
</div>
</blockquote>
</div>
<br class="">
</div>
</body>
</html>