[gpfsug-discuss] Spectrum Scale Ganesha NFS multi threaded AFM?

Tomer Perry TOMP at il.ibm.com
Sat Feb 22 09:35:32 GMT 2020


Hi,

It's implied in the TCP tuning suggestions (as one needs bandwidth and 
latency in order to calculate the BDP).
The overall theory is documented in multiple places (TCP windows, 
congestion control etc.) - a nice place to start is 
https://en.wikipedia.org/wiki/TCP_tuning .
I tend to use this calculator in order to find the right values: 
https://www.switch.ch/network/tools/tcp_throughput/ 
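
As a rough sketch of what the calculator does (the numbers below are 
hypothetical - plug in your own measured bandwidth and RTT):

    # Bandwidth-delay product: bytes that must be "in flight" to keep the pipe full.
    # Example values only - substitute your own link speed and measured RTT.
    link_bits_per_s = 10 * 10**9          # 10 Gbit/s link
    rtt_s = 0.050                         # 50 ms round trip (hypothetical)
    bdp_bytes = link_bits_per_s / 8 * rtt_s
    print(f"BDP: {bdp_bytes / 2**20:.1f} MiB")   # ~59.6 MiB
    # A single TCP stream needs a window at least this large to fill the link;
    # the usual default maximums (a few MiB) give you only a fraction of that.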

Parallel I/O and multiple mounts come on top of the above - not instead 
of it. They may appear to make things better, but they only give you 
multiples of the small per-stream numbers we're getting initially (e.g. 4 
streams x 50-60 MB/s still falls well short of a 10 Gbit link).

Regards,

Tomer Perry
Scalable I/O Development (Spectrum Scale)
email: tomp at il.ibm.com
1 Azrieli Center, Tel Aviv 67021, Israel
Global Tel:    +1 720 3422758
Israel Tel:      +972 3 9188625
Mobile:         +972 52 2554625




From:   "Luis Bolinches" <luis.bolinches at fi.ibm.com>
To:     "gpfsug main discussion list" <gpfsug-discuss at spectrumscale.org>
Cc:     Jake Carrol <jake.carroll at uq.edu.au>
Date:   22/02/2020 07:56
Subject:        [EXTERNAL] Re: [gpfsug-discuss] Spectrum Scale Ganesha NFS 
multi threaded AFM?
Sent by:        gpfsug-discuss-bounces at spectrumscale.org



Hi

While I agree with what has already been mentioned here - it is really 
spot on - I think Andi has not yet revealed the latency between the 
sites. Latency is as key to throughput results as your link speed, if not 
more so. 

--
Cheers

On 22. Feb 2020, at 3.08, Andrew Beattie <abeattie at au1.ibm.com> wrote:

Andi,

You may want to reach out to Jake Carroll at the University of Queensland.

When UQ first started exploring AFM and global AFM transfers, they did 
extensive testing around tuning the NFS stack.

From memory, they got to a point where they could pretty much saturate a 
10 Gbit link, but they had to do a lot of tuning to get there.

We are now effectively repeating the process with AFM, but using 100 Gbit 
links, which brings its own set of interesting challenges.





Regards

Andrew

Sent from my iPhone

On 22 Feb 2020, at 09:32, Andi Christiansen <andi at christiansen.xxx> wrote:

Hi,

Thanks for answering!

Quite possibly - I'm not too much into NFS and AFM, so I might have used 
the wrong term.

I looked at what you suggested (very interesting reading) and set up 
multiple cache gateways to our home NFS server with the new 
afmParallelMount feature. It was as I suspected: each gateway that does a 
write gets 50-60 MB/s of bandwidth, so although this utilizes more in 
aggregate (4 gateways = 4 x 50-60 MB/s), I'm still confused as to why 
one server with one link cannot utilize more than 50-60 MB/s on a 10 Gbit 
link. Even 200-240 MB/s is much slower than a regular 10 Gbit interface 
can deliver.
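
To put some numbers on my confusion, here is the back-of-the-envelope 
check I did (the RTT below is a guess for illustration - I still need to 
measure ours properly):

    # A single TCP stream tops out at roughly window / RTT.
    # What window/RTT combination explains ~50-60 MB/s per stream?
    window_bytes = 2 * 2**20      # assume an effective ~2 MiB TCP window
    rtt_s = 0.035                 # hypothetical 35 ms round trip
    per_stream_bytes_per_s = window_bytes / rtt_s
    print(f"~{per_stream_bytes_per_s / 10**6:.0f} MB/s per stream")   # ~60 MB/s
    # That is only ~5% of a 10 Gbit link - consistent with what we see per gateway.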

Best Regards 
Andi Christiansen



Sent from my iPhone

On 21 Feb 2020, at 18.25, Tomer Perry <TOMP at il.ibm.com> wrote:

Hi,

I believe the right term is not multithreaded but rather multistream: NFS 
will submit multiple requests in parallel, but without a large enough TCP 
window you won't be able to get much out of each stream.
So, the first place to look is here: 
https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.4/com.ibm.spectrum.scale.v5r04.doc/bl1adm_tuningbothnfsclientnfsserver.htm 
- and while it talks about "Kernel NFS", the same applies to any TCP 
socket based communication (including Ganesha). I tend to test the 
performance using iperf/nsdperf (just make sure to use a single stream) in 
order to see the expected maximum performance.
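
Just to illustrate that the window is a per-socket property (Python here 
purely as a sketch - the real knobs are the kernel sysctls described in 
the link above, which cap what any application can request):

    import socket

    # net.core.rmem_max / wmem_max and net.ipv4.tcp_rmem / tcp_wmem cap what
    # a socket can get; per socket, the same limits look like this:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 64 * 2**20)  # request 64 MiB
    s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 64 * 2**20)
    # Linux doubles the requested value and clamps it to the sysctl maximum; if
    # the result is far below the BDP, one stream can never fill the pipe.
    print(s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))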
After that, you can start looking into "how can I get multiple streams?" - 
for that there are two options:
https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.4/com.ibm.spectrum.scale.v5r04.doc/bl1ins_paralleldatatransfersafm.htm

and
https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.4/com.ibm.spectrum.scale.v5r04.doc/b1lins_afmparalleldatatransferwithremotemounts.htm


The former enhances large file transfers, while the latter (new in 5.0.4) 
helps with multiple small files as well.



Regards,

Tomer Perry
Scalable I/O Development (Spectrum Scale)
email: tomp at il.ibm.com
1 Azrieli Center, Tel Aviv 67021, Israel
Global Tel:    +1 720 3422758
Israel Tel:      +972 3 9188625
Mobile:         +972 52 2554625




From:        Andi Christiansen <andi at christiansen.xxx>
To:        "gpfsug-discuss at spectrumscale.org" <
gpfsug-discuss at spectrumscale.org>
Date:        21/02/2020 15:25
Subject:        [EXTERNAL] [gpfsug-discuss] Spectrum Scale Ganesha NFS 
multi threaded AFM?
Sent by:        gpfsug-discuss-bounces at spectrumscale.org



Hi all, 

I have searched the internet for a good while now with no answer to this, 
so I hope someone can tell me whether this is possible or not. 

We use NFS from our Cluster1 to an AFM-enabled fileset on Cluster2. That 
is working as intended, but when AFM transfers files from one site to 
another it caps out at about 500-700 Mbit/s, which isn't impressive. The 
sites are connected via 10 Gbit links, but the round-trip time is too 
high to use the NSD protocol with AFM. 

On the cluster where the fileset is exported we can only see one session 
against the client cluster. Is there a way to tune either Ganesha or AFM 
to use more threads/sessions? 

We have about 7.7 Gbit/s of usable bandwidth between the sites on the 
10 Gbit links, and with multiple NFS sessions we can reach that maximum 
(each session getting about 50-60 MB/s). 
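
For what it's worth, the rough session arithmetic from our measurements:

    # How many ~55 MB/s NFS sessions does it take to fill our 7.7 Gbit/s?
    usable_bytes_per_s = 7.7 * 10**9 / 8    # ~960 MB/s usable between the sites
    per_session = 55 * 10**6                # ~55 MB/s per session (measured)
    print(f"~{usable_bytes_per_s / per_session:.0f} sessions to saturate")  # ~18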

Best Regards 
Andi Christiansen





_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss






