<font size=2 face="sans-serif">Hi,</font><br><br><font size=2 color=blue face="sans-serif">>>So the problem was
some bad ib routing. We changed some ib links, and then we got also 12GB/s
read with nsdperf.</font><br><font size=2 color=blue face="sans-serif">>>On our clients we
then are able to achieve the 7,2GB/s in total we also saw using the nsd
servers!</font><br><br><font size=2 face="sans-serif">This is good to hear.</font><br><br><font size=2 color=blue face="sans-serif">>> We are now running
some tests with different blocksizes and parameters, because our backend
storage is able to do more than the 7.2GB/s we get with GPFS (more like
14GB/s in total). I guess prefetchthreads and nsdworkerthreads are the
ones to look at?</font><br><br><font size=2 face="sans-serif">If you are on 4.2.0.3 or higher, you
can use the workerThreads config parameter (start with a value of 128, and
increase in increments of 128 up to the maximum supported); this setting will
auto-adjust the values of other parameters such as prefetchThreads, worker3Threads,
etc.</font><br><br><a href="https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20%28GPFS%29/page/Tuning%20Parameters"><font size=2 color=blue face="sans-serif">https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20%28GPFS%29/page/Tuning%20Parameters</font></a><br><br><font size=2 face="Arial">In addition to trying a larger file-system
block size (e.g. 4MiB or higher, such that it aligns with the storage volume
RAID stripe width) and config parameters (e.g. workerThreads, ignorePrefetchLUNCount),
it will be good to assess the "backend storage" performance for
a random I/O access pattern (with block I/O sizes in units of the FS block size),
as this is the more likely I/O scenario the backend storage will experience
when many GPFS nodes perform I/O simultaneously to the file system
(in a production environment). </font><br><br><font size=2 face="Arial">mmcrfs has the option "[-j {cluster | scatter}]".
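To make the tuning and assessment steps above concrete, here is a rough sketch: stepping workerThreads with mmchconfig, then measuring the backend's random-read throughput at the FS block size with fio. The 4MiB block size, queue depths, and the /dev/mapper/lunX device path are placeholder assumptions for your environment, not recommendations:

```shell
# Sketch only: values and device names are placeholders for your environment.

# Step the GPFS workerThreads parameter (4.2.0.3+); -i applies immediately.
mmchconfig workerThreads=128 -i
mmlsconfig workerThreads          # confirm the active value before re-testing

# Assess backend random-read performance at the FS block size (4MiB assumed),
# which approximates what many concurrent GPFS clients present to storage.
# /dev/mapper/lunX is a placeholder; run against a raw test LUN only, never
# a LUN holding live data.
fio --name=backend-randread --rw=randread --bs=4m --direct=1 \
    --ioengine=libaio --iodepth=16 --numjobs=8 \
    --time_based --runtime=60 --group_reporting \
    --filename=/dev/mapper/lunX
```

Repeating the fio run with --rw=randwrite (again on a scratch LUN) gives the write side; comparing the aggregate against the ~14GB/s sequential figure shows how much of the gap is the access pattern rather than GPFS itself.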
"-j scatter" is recommended for consistent file-system
performance over the lifetime of the file system, but "-j scatter"
will result in random I/O to the backend storage (even though the application
is performing sequential I/O). For test purposes, you may assess the
GPFS file-system performance by creating the file system with "-j cluster";
you may see good sequential results (compared to -j scatter) for lower
client counts, but as you scale the client count the combined workload
will itself look like "scatter" to the backend storage (limiting FS
performance to the random I/O performance of the backend storage).</font><br><br><font size=2 face="sans-serif">[snip from mmcrfs]</font><br><font size=2 color=blue face="Courier New"><b><i>layoutMap={scatter | cluster}</i></b></font><br><font size=2 color=blue face="Courier New"><i>
Specifies the block allocation
map type. When</i></font><br><font size=2 color=blue face="Courier New"><i>
allocating blocks for a given
file, GPFS first</i></font><br><font size=2 color=blue face="Courier New"><i>
uses a round-robin algorithm
to spread the data</i></font><br><font size=2 color=blue face="Courier New"><i>
across all disks in the storage
pool. After a</i></font><br><font size=2 color=blue face="Courier New"><i>
disk is selected, the location
of the data</i></font><br><font size=2 color=blue face="Courier New"><i>
block on the disk is determined
by the block</i></font><br><font size=2 color=blue face="Courier New"><i>
allocation map type. If <b>cluster</b> is</i></font><br><font size=2 color=blue face="Courier New"><i>
specified, GPFS attempts to allocate
blocks in</i></font><br><font size=2 color=blue face="Courier New"><i>
clusters. Blocks that belong
to a particular</i></font><br><font size=2 color=blue face="Courier New"><i>
file are kept adjacent to each
other within</i></font><br><font size=2 color=blue face="Courier New"><i>
each cluster. If <b>scatter</b> is specified,</i></font><br><font size=2 color=blue face="Courier New"><i>
the location of the block is
chosen randomly.</i></font><br><br><font size=2 color=blue face="Courier New"><i>
The <b>cluster</i></b><i> allocation
method may provide</i></font><br><font size=2 color=blue face="Courier New"><i>
better disk performance for some
disk</i></font><br><font size=2 color=blue face="Courier New"><i>
subsystems in relatively small
installations.</i></font><br><font size=2 color=blue face="Courier New"><i>
The benefits of clustered block
allocation</i></font><br><font size=2 color=blue face="Courier New"><i>
diminish when the number of nodes
in the</i></font><br><font size=2 color=blue face="Courier New"><i>
cluster or the number of disks
in a file system</i></font><br><font size=2 color=blue face="Courier New"><i>
increases, or when the file system's
free space</i></font><br><font size=2 color=blue face="Courier New"><i>
becomes fragmented. The <b>cluster</i></b></font><br><font size=2 color=blue face="Courier New"><i>
allocation method is the default
for GPFS</i></font><br><font size=2 color=blue face="Courier New"><i>
clusters with eight or fewer
nodes and for file</i></font><br><font size=2 color=blue face="Courier New"><i>
systems with eight or fewer disks.</i></font><br><br><font size=2 color=blue face="Courier New"><i>
The <b>scatter</i></b><i> allocation
method provides</i></font><br><font size=2 color=blue face="Courier New"><i>
more consistent file system performance
by</i></font><br><font size=2 color=blue face="Courier New"><i>
averaging out performance variations
due to</i></font><br><font size=2 color=blue face="Courier New"><i>
block location (for many disk
subsystems, the</i></font><br><font size=2 color=blue face="Courier New"><i>
location of the data relative
to the disk edge</i></font><br><font size=2 color=blue face="Courier New"><i>
has a substantial effect on performance).
This</i></font><br><font size=2 color=blue face="Courier New"><i>
allocation method is appropriate
in most cases</i></font><br><font size=2 color=blue face="Courier New"><i>
and is the default for GPFS clusters
with more</i></font><br><font size=2 color=blue face="Courier New"><i>
than eight nodes or file systems
with more than</i></font><br><font size=2 color=blue face="Courier New"><i>
eight disks.</i></font><br><br><font size=2 color=blue face="Courier New"><i>
The block allocation map type cannot be changed</i></font><br><font size=2 color=blue face="Courier New"><i>
after the storage pool has been
created.</i></font><font size=2 color=blue face="Courier New"><i>..</i></font><font size=2 color=blue face="Courier New"><i>..</i></font><font size=2 face="Courier New"><b> </b></font><font size=2 color=blue face="Courier New"><b><i>-j {cluster | scatter}</i></b></font><br><font size=2 color=blue face="Courier New"><i>
Specifies the default block allocation map type to be</i></font><br><font size=2 color=blue face="Courier New"><i>
used if <b>layoutMap</i></b><i> is not specified for a given</i></font><br><font size=2 color=blue face="Courier New"><i>
storage pool.</i></font><br><font size=2 face="sans-serif">[/snip from mmcrfs]</font><br><br><font size=2 face="sans-serif">My two cents,</font><br><font size=2 face="sans-serif">-Kums</font><br><br><br><br><br><font size=1 color=#5f5f5f face="sans-serif">From:
</font><font size=1 face="sans-serif">Kenneth Waegeman <kenneth.waegeman@ugent.be></font><br><font size=1 color=#5f5f5f face="sans-serif">To:
</font><font size=1 face="sans-serif">gpfsug main discussion
list <gpfsug-discuss@spectrumscale.org></font><br><font size=1 color=#5f5f5f face="sans-serif">Date:
</font><font size=1 face="sans-serif">05/04/2017 09:23 AM</font><br><font size=1 color=#5f5f5f face="sans-serif">Subject:
</font><font size=1 face="sans-serif">Re: [gpfsug-discuss]
bizarre performance behavior</font><br><font size=1 color=#5f5f5f face="sans-serif">Sent by:
</font><font size=1 face="sans-serif">gpfsug-discuss-bounces@spectrumscale.org</font><br><hr noshade><br><br><br><font size=3>Hi,</font><p><font size=3>We found out using ib_read_bw and ib_write_bw that there
were some links between server and clients degraded, having a bandwidth
of 350MB/s</font><p><font size=3>Strangely, nsdperf did not report the same. It reported
12GB/s write and 9GB/s read, which was much more than we actually could
achieve.</font><p><font size=3>So the problem was some bad ib routing. We changed some
ib links, and then we got also 12GB/s read with nsdperf.</font><p><font size=3>On our clients we then are able to achieve the 7,2GB/s
in total we also saw using the nsd servers!</font><p><font size=3>Many thanks for the help !!</font><p><font size=3>We are now running some tests with different blocksizes
and parameters, because our backend storage is able to do more than the
7.2GB/s we get with GPFS (more like 14GB/s in total). I guess prefetchthreads
and nsdworkerthreads are the ones to look at?</font><p><font size=3>Cheers!</font><p><font size=3>Kenneth</font><p><font size=3>On 21/04/17 22:27, Kumaran Rajaram wrote:</font><br><font size=2 face="sans-serif">Hi Kenneth,</font><font size=3><br></font><font size=2 face="sans-serif"><br>As it was mentioned earlier, it will be good to first verify the raw network
performance between the NSD client and NSD server using the nsdperf tool
that is built with RDMA support.</font><font size=2 face="Courier New"><br>g++ -O2 -DRDMA -o nsdperf -lpthread -lrt -libverbs -lrdmacm nsdperf.C</font><font size=3><br></font><font size=2 face="sans-serif"><br>In addition, since you have 2 x NSD servers it will be good to perform
an NSD client file-system performance test with just a single NSD server
(mmshutdown the other server, assuming all the NSDs have primary and secondary
NSD servers configured and quorum will remain intact when an NSD server is brought
down) to see if it helps to improve the read performance, and whether there are
variations in the file-system read bandwidth results between NSD_server#1
'active' vs. NSD_server #2 'active' (with other NSD server in GPFS "down"
state). If there is significant variation, it can help to isolate the issue
to a particular NSD server (HW or IB issue?).</font><font size=3><br></font><font size=2 face="sans-serif"><br>You can issue "mmdiag --waiters" on the NSD client as well as the NSD
servers during your dd test, to verify whether there are unusually long GPFS waiters.
In addition, you may issue the Linux "perf top -z" command
on the GPFS node to see if there is high CPU usage by any particular
call/event (e.g., if the GPFS config parameter verbsRdmaMaxSendBytes has
been set to a low value from the default of 16M, it can cause RDMA
completion threads to become CPU bound). Please verify the performance scenarios
detailed in Chapter 22 in Spectrum Scale Problem Determination Guide (link
below).</font><font size=3><br></font><font size=3 color=blue><u><br></u></font><a href="https://www.ibm.com/support/knowledgecenter/en/STXKQY_4.2.3/com.ibm.spectrum.scale.v4r23.doc/pdf/scale_pdg.pdf?view=kc"><font size=2 color=blue face="sans-serif"><u>https://www.ibm.com/support/knowledgecenter/en/STXKQY_4.2.3/com.ibm.spectrum.scale.v4r23.doc/pdf/scale_pdg.pdf?view=kc</u></font></a><font size=3><br></font><font size=2 face="sans-serif"><br>Thanks,<br>-Kums </font><font size=3><br><br><br><br><br></font><font size=1 color=#5f5f5f face="sans-serif"><br>From: </font><font size=1 face="sans-serif">Kenneth
Waegeman </font><a href=mailto:kenneth.waegeman@ugent.be><font size=1 color=blue face="sans-serif"><u><kenneth.waegeman@ugent.be></u></font></a><font size=1 color=#5f5f5f face="sans-serif"><br>To: </font><font size=1 face="sans-serif">gpfsug
main discussion list </font><a href="mailto:gpfsug-discuss@spectrumscale.org"><font size=1 color=blue face="sans-serif"><u><gpfsug-discuss@spectrumscale.org></u></font></a><font size=1 color=#5f5f5f face="sans-serif"><br>Date: </font><font size=1 face="sans-serif">04/21/2017
11:43 AM</font><font size=1 color=#5f5f5f face="sans-serif"><br>Subject: </font><font size=1 face="sans-serif">Re:
[gpfsug-discuss] bizarre performance behavior</font><font size=1 color=#5f5f5f face="sans-serif"><br>Sent by: </font><a href="mailto:gpfsug-discuss-bounces@spectrumscale.org"><font size=1 color=blue face="sans-serif"><u>gpfsug-discuss-bounces@spectrumscale.org</u></font></a><font size=3><br></font><hr noshade><font size=3><br><br><br>Hi, </font><p><font size=3>We already verified this on our nsds:</font><p><font size=3>[root@nsd00 ~]# /opt/dell/toolkit/bin/syscfg --QpiSpeed<br>QpiSpeed=maxdatarate<br>[root@nsd00 ~]# /opt/dell/toolkit/bin/syscfg --turbomode<br>turbomode=enable<br>[root@nsd00 ~]# /opt/dell/toolkit/bin/syscfg --SysProfile <br>SysProfile=perfoptimized</font><p><font size=3>so sadly this is not the issue.</font><p><font size=3>Also the output of the verbs commands look ok, there are
connections from the client to the nsds and there is data being read and
written.</font><p><font size=3>Thanks again! </font><p><font size=3>Kenneth</font><p><font size=3><br>On 21/04/17 16:01, Kumaran Rajaram wrote:</font><font size=2 face="sans-serif"><br>Hi,<br><br>Try enabling the following in the BIOS of the NSD servers (screen shots
below) </font><ul><li><font size=2 face="sans-serif">Turbo Mode - Enable</font><li><font size=2 face="sans-serif">QPI Link Frequency - Max Performance</font><li><font size=2 face="sans-serif">Operating Mode - Maximum Performance</font><li><font size=2 face="Arial">>>>>While we have even better
performance with sequential reads on raw storage LUNS, using GPFS we can
only reach 1GB/s in total (each nsd server seems limited by 0,5GB/s) independent
of the number of clients </font><br><font size=2 face="Arial">>>We are testing from 2 testing machines
connected to the nsds with infiniband, verbs enabled.</font></ul><font size=2 face="sans-serif"><br>Also, it will be good to verify that all the GPFS nodes have Verbs RDMA
started, using "mmfsadm test verbs status", and that the NSD client-server
communication from client to server during "dd" is actually using
Verbs RDMA, using the "mmfsadm test verbs conn" command (on the
NSD client doing the dd). If not, GPFS might be using the TCP/IP network over
which the cluster is configured, impacting performance (if this is the case,
check mmfs.log.latest for any Verbs RDMA related errors and resolve them). </font><img src=cid:_1_0AF948300AF945C4007ADD8585258116 style="border:0px solid;"><font size=3><br></font><img src=cid:_1_0AF94A4C0AF945C4007ADD8585258116 style="border:0px solid;"><font size=3><br></font><img src=cid:_1_0AF94C940AF945C4007ADD8585258116 style="border:0px solid;"><font size=2 face="sans-serif"><br><br>Regards,<br>-Kums</font><font size=3><br><br><br><br><br></font><font size=1 color=#5f5f5f face="sans-serif"><br><br>From: </font><font size=1 face="sans-serif">"Knister,
Aaron S. (GSFC-606.2)[COMPUTER SCIENCE CORP]" </font><a href=mailto:aaron.s.knister@nasa.gov><font size=1 color=blue face="sans-serif"><u><aaron.s.knister@nasa.gov></u></font></a><font size=1 color=#5f5f5f face="sans-serif"><br>To: </font><font size=1 face="sans-serif">gpfsug
main discussion list </font><a href="mailto:gpfsug-discuss@spectrumscale.org"><font size=1 color=blue face="sans-serif"><u><gpfsug-discuss@spectrumscale.org></u></font></a><font size=1 color=#5f5f5f face="sans-serif"><br>Date: </font><font size=1 face="sans-serif">04/21/2017
09:11 AM</font><font size=1 color=#5f5f5f face="sans-serif"><br>Subject: </font><font size=1 face="sans-serif">Re:
[gpfsug-discuss] bizarre performance behavior</font><font size=1 color=#5f5f5f face="sans-serif"><br>Sent by: </font><a href="mailto:gpfsug-discuss-bounces@spectrumscale.org"><font size=1 color=blue face="sans-serif"><u>gpfsug-discuss-bounces@spectrumscale.org</u></font></a><font size=3><br></font><hr noshade><font size=3><br><br><br>Fantastic news! It might also be worth running "cpupower monitor"
or "turbostat" on your NSD servers while you're running dd tests
from the clients to see what CPU frequency your cores are actually running
at. <br><br>A typical NSD server workload (especially with IB verbs and for reads)
can be pretty light on CPU, which might not prompt your CPU frequency governor
to up the frequency (which can affect throughput). If your frequency-scaling
governor isn't kicking up the frequency of your CPUs, I've seen that cause
this behavior in my testing. <br><br>-Aaron<br><br><br><br><br>On April 21, 2017 at 05:43:40 EDT, Kenneth Waegeman </font><a href=mailto:kenneth.waegeman@ugent.be><font size=3 color=blue><u><kenneth.waegeman@ugent.be></u></font></a><font size=3> wrote:
</font><p><font size=3>Hi, </font><p><font size=3>We are running a test setup with 2 NSD Servers backed by
4 Dell Powervaults MD3460s. nsd00 is primary serving LUNS of controller
A of the 4 powervaults, nsd02 is primary serving LUNS of controller B.
</font><p><font size=3>We are testing from 2 testing machines connected to the
nsds with infiniband, verbs enabled.</font><p><font size=3>When we do dd from the NSD servers, we see indeed performance
going to 5.8GB/s for one nsd, 7.2GB/s for the two! So it looks like GPFS
is able to get the data at a decent speed. Since we can write from the
clients at a good speed, I didn't suspect the communication between clients
and nsds being the issue, especially since total performance stays the
same using 1 or multiple clients. <br><br>I'll use the nsdperf tool to see if we can find anything, <br><br>thanks!<br><br>K<br><br>On 20/04/17 17:04, Knister, Aaron S. (GSFC-606.2)[COMPUTER SCIENCE CORP]
wrote:<br>Interesting. Could you share a little more about your architecture? Is
it possible to mount the fs on an NSD server and do some dd's from the
fs on the NSD server? If that gives you decent performance perhaps try
NSDPERF next </font><a href="https://www.ibm.com/developerworks/community/wikis/home?lang=en#%21/wiki/General+Parallel+File+System+%28GPFS%29/page/Testing+network+performance+with+nsdperf"><font size=3 color=blue><u>https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General+Parallel+File+System+(GPFS)/page/Testing+network+performance+with+nsdperf</u></font></a><font size=3><br><br>-Aaron<br><br><br><br><br>On April 20, 2017 at 10:53:47 EDT, Kenneth Waegeman </font><a href=mailto:kenneth.waegeman@ugent.be></a><a href=mailto:kenneth.waegeman@ugent.be><font size=3 color=blue><u><kenneth.waegeman@ugent.be></u></font></a><font size=3>wrote:</font><p><font size=3>Hi,</font><p><font size=3>Having an issue that looks the same as this one: </font><p><font size=3>We can do sequential writes to the filesystem at 7,8 GB/s
total , which is the expected speed for our current storage <br>backend. While we have even better performance with sequential reads
on raw storage LUNS, using GPFS we can only reach 1GB/s in total (each
nsd server seems limited by 0,5GB/s) independent of the number of clients
<br>(1,2,4,..) or ways we tested (fio,dd). We played with blockdev params,
MaxMBps, PrefetchThreads, hyperthreading, c1e/cstates, .. as discussed
in this thread, but nothing seems to impact this read performance. </font><p><font size=3>Any ideas?</font><p><font size=3>Thanks!<br><br>Kenneth<br><br>On 17/02/17 19:29, Jan-Frode Myklebust wrote:<br>I just had a similar experience from a sandisk infiniflash system SAS-attached
to a single host. Gpfsperf reported 3,2 Gbyte/s for writes, and 250-300
Mbyte/s on sequential reads!! Random reads were on the order of 2 Gbyte/s.<br><br>After a bit of head scratching and fumbling around I found out that reducing
maxMBpS from 10000 to 100 fixed the problem! Digging further I found that
reducing prefetchThreads from default=72 to 32 also fixed it, while leaving
maxMBpS at 10000. Can now also read at 3,2 GByte/s.<br><br>Could something like this be the problem on your box as well?<br><br><br><br>-jf<br>fre. 17. feb. 2017 kl. 18.13 skrev Aaron Knister <</font><a href=mailto:aaron.s.knister@nasa.gov></a><a href=mailto:aaron.s.knister@nasa.gov><font size=3 color=blue><u>aaron.s.knister@nasa.gov</u></font></a><font size=3>>:<br>Well, I'm somewhat scrounging for hardware. This is in our test<br>environment :) And yep, it's got the 2U gpu-tray in it although even<br>without the riser it has 2 PCIe slots onboard (excluding the on-board<br>dual-port mezz card) so I think it would make a fine NSD server even<br>without the riser.<br><br>-Aaron<br><br>On 2/17/17 11:43 AM, Simon Thompson (Research Computing - IT Services)<br>wrote:<br>> Maybe its related to interrupt handlers somehow? You drive the load
up on one socket, you push all the interrupt handling to the other socket
where the fabric card is attached?<br>><br>> Dunno ... (Though I am intrigued you use idataplex nodes as NSD servers,
I assume its some 2U gpu-tray riser one or something !)<br>><br>> Simon<br>> ________________________________________<br>> From: </font><a href="mailto:gpfsug-discuss-bounces@spectrumscale.org" target=_blank><font size=3 color=blue><u>gpfsug-discuss-bounces@spectrumscale.org</u></font></a><font size=3>[</font><a href="mailto:gpfsug-discuss-bounces@spectrumscale.org"><font size=3 color=blue><u>gpfsug-discuss-bounces@spectrumscale.org</u></font></a><font size=3>]
on behalf of Aaron Knister [</font><a href=mailto:aaron.s.knister@nasa.gov><font size=3 color=blue><u>aaron.s.knister@nasa.gov</u></font></a><font size=3>]<br>> Sent: 17 February 2017 15:52<br>> To: gpfsug main discussion list<br>> Subject: [gpfsug-discuss] bizarre performance behavior<br>><br>> This is a good one. I've got an NSD server with 4x 16GB fibre<br>> connections coming in and 1x FDR10 and 1x QDR connection going out
to<br>> the clients. I was having a really hard time getting anything resembling<br>> sensible performance out of it (4-5Gb/s writes but maybe 1.2Gb/s for<br>> reads). The back-end is a DDN SFA12K and I *know* it can do better
than<br>> that.<br>><br>> I don't remember quite how I figured this out but simply by running<br>> "openssl speed -multi 16" on the nsd server to drive up
the load I saw<br>> an almost 4x performance jump which is pretty much goes against every<br>> sysadmin fiber in me (i.e. "drive up the cpu load with unrelated
crap to<br>> quadruple your i/o performance").<br>><br>> This feels like some type of C-states frequency scaling shenanigans
that<br>> I haven't quite ironed down yet. I booted the box with the following<br>> kernel parameters "intel_idle.max_cstate=0 processor.max_cstate=0"
which<br>> didn't seem to make much of a difference. I also tried setting the<br>> frequency governor to userspace and setting the minimum frequency
to<br>> 2.6ghz (it's a 2.6ghz cpu). None of that really matters-- I still
have<br>> to run something to drive up the CPU load and then performance improves.<br>><br>> I'm wondering if this could be an issue with the C1E state? I'm curious<br>> if anyone has seen anything like this. The node is a dx360 M4<br>> (Sandybridge) with 16 2.6GHz cores and 32GB of RAM.<br>><br>> -Aaron<br>><br>> --<br>> Aaron Knister<br>> NASA Center for Climate Simulation (Code 606.2)<br>> Goddard Space Flight Center<br>> (301) 286-2776<br>> _______________________________________________<br>> gpfsug-discuss mailing list<br>> gpfsug-discuss at </font><a href=http://spectrumscale.org/ target=_blank><font size=3 color=blue><u>spectrumscale.org</u></font></a><font size=3><br>> </font><a href="http://gpfsug.org/mailman/listinfo/gpfsug-discuss" target=_blank><font size=3 color=blue><u>http://gpfsug.org/mailman/listinfo/gpfsug-discuss</u></font></a><font size=3><br>> _______________________________________________<br>> gpfsug-discuss mailing list<br>> gpfsug-discuss at </font><a href=http://spectrumscale.org/ target=_blank><font size=3 color=blue><u>spectrumscale.org</u></font></a><font size=3><br>> </font><a href="http://gpfsug.org/mailman/listinfo/gpfsug-discuss" target=_blank><font size=3 color=blue><u>http://gpfsug.org/mailman/listinfo/gpfsug-discuss</u></font></a><font size=3><br>><br><br>--<br>Aaron Knister<br>NASA Center for Climate Simulation (Code 606.2)<br>Goddard Space Flight Center<br>(301) 286-2776<br>_______________________________________________<br>gpfsug-discuss mailing list<br>gpfsug-discuss at </font><a href=http://spectrumscale.org/ target=_blank><font size=3 color=blue><u>spectrumscale.org</u></font></a><font size=3 color=blue><u><br></u></font><a href="http://gpfsug.org/mailman/listinfo/gpfsug-discuss" target=_blank><font size=3 color=blue><u>http://gpfsug.org/mailman/listinfo/gpfsug-discuss</u></font></a><font size=3><br></font><tt><font size=3><br><br>_______________________________________________<br>gpfsug-discuss mailing 
list<br>gpfsug-discuss at spectrumscale.org</font></tt><font size=3 color=blue><u><br></u></font><a href="http://gpfsug.org/mailman/listinfo/gpfsug-discuss"><tt><font size=3 color=blue><u>http://gpfsug.org/mailman/listinfo/gpfsug-discuss</u></font></tt></a><font size=3><br></font><br><BR>