...making Linux just a little more fun!
[ In reference to "TCP and Linux' Pluggable Congestion Control Algorithms" in LG#135 ]
René Pfeiffer [lynx at luchs.at]
I am forwarding this exchange, since I asked for permission to publish it.
----- Forwarded message from René Pfeiffer <lynx@luchs.at> -----
From: René Pfeiffer <lynx@luchs.at>
Date: Sun, 30 Mar 2008 20:26:00 +0200
To: Erik van Zijst <erik.van.zijst@layerstream.com>
Subject: Re: Scalable TCP Tuning
Message-ID: <20080330182600.GC4927@nephtys.luchs.at>
In-Reply-To: <47EF3432.9090307@layerstream.com>

Hello, Erik!
I'll answer in private, but please let me know if I can also send this answer to The Answer Gang mailing list. We like to keep all feedback there, so our readers can find it.
On Mar 29, 2008 at 2333 -0700, Erik van Zijst appeared and said:
> Hi Rene,
>
> I read some of your Linux Gazette articles, specifically the one on TCP's
> pluggable congestion control, and maybe you can give me a little push in the
> right direction.
>
> I'm in a startup doing streaming video over TCP (rather than UDP with
> forward error correction), but TCP is giving me some latency headaches. To
> ensure uninterrupted playback, we use a chunky client-side playback buffer,
> but in its relentless quest for throughput optimization, often under high
> bandwidth and transcontinental RTTs, TCP manages to introduce enough
> latency to underrun any buffer.
>
> With streaming video, keeping latency within bounds is often more important
> than squeezing out a few percent extra throughput. I've looked at the
> pluggable congestion control algorithms, which are great, but pretty much
> all of them focus on high throughput rather than latency.
Yes, most of the algorithms deal with increasing throughput on fat pipes with high latency. Only TCP Veno and TCP Westwood address other scenarios (frequent packet loss on wireless links). Apart from that, Interactive TCP (iTCP) may be interesting, but it isn't available as a module (yet). http://www.medianet.kent.edu/itcp/main.html
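If you want to experiment with the algorithms per socket rather than system-wide, here is a minimal sketch in C, assuming a kernel with the relevant module loaded (the TCP_CONGESTION socket option has been available since Linux 2.6.13; "westwood" is just an example choice):

  #include <stdio.h>
  #include <string.h>
  #include <sys/socket.h>
  #include <netinet/in.h>
  #include <netinet/tcp.h>

  int main(void)
  {
      int fd = socket(AF_INET, SOCK_STREAM, 0);
      if (fd < 0) {
          perror("socket");
          return 1;
      }

      /* Ask the kernel to use TCP Westwood on this socket only.
         This fails (e.g. ENOENT) if the tcp_westwood module isn't loaded. */
      const char algo[] = "westwood";
      if (setsockopt(fd, IPPROTO_TCP, TCP_CONGESTION, algo, strlen(algo)) < 0)
          perror("setsockopt(TCP_CONGESTION)");

      /* Read it back to verify which algorithm is actually in use. */
      char buf[16];
      socklen_t len = sizeof(buf);
      if (getsockopt(fd, IPPROTO_TCP, TCP_CONGESTION, buf, &len) == 0)
          printf("congestion control: %s\n", buf);

      return 0;
  }

This way you could try Veno or Westwood on the video connections alone without touching the default for the rest of the system.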
> TCP maintains the dynamic send buffer between user- and kernel-space, and in
> order to minimize context switches, Linux seems to have a tendency to make
> these really large. On high-latency, transcontinental connections, I often
> get 1MB+ send buffers that can easily contain over 10 seconds of video. From
> what I see, the kernel modules mostly seem to tune only the size of the cwnd
> within the send buffer, rather than the send buffer as a whole, but since
> this is probably the main cause of increased latency, I'm looking for a way
> to tune this and always keep it as small as possible. Linux already seems to
> increase the send buffer's capacity when the cwnd increases, but never seems
> to shrink it again.
>
> Would you have any tips for a Linux-based startup when it comes to
> low-latency TCP tuning?
The only things I've noticed are the following settings in /proc:
- /proc/sys/net/ipv4/tcp_low_latency controls whether the data is forwarded directly through the TCP stack to the application buffer (=1) or not (=0). I have never benchmarked or compared this setting, though it's always on on my laptop (as I noticed just now; I must have fiddled with sysctl.conf here).
- The application keeps its own buffer, but you can also influence the maximum socket buffers of the TCP stack in the kernel. http://dsd.lbl.gov/TCP-tuning/linux.html describes the maximum size of the send/receive buffers. You could try reducing these, but maybe you can't influence both sides of the connection. You can also cap the send buffer per socket, as in the sketch below.
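To illustrate the per-socket approach, a minimal C sketch that caps the kernel send buffer with SO_SNDBUF before connecting; the 32 KB figure is just an arbitrary example value:

  #include <stdio.h>
  #include <sys/socket.h>
  #include <netinet/in.h>

  int main(void)
  {
      int fd = socket(AF_INET, SOCK_STREAM, 0);
      if (fd < 0) {
          perror("socket");
          return 1;
      }

      /* Cap the kernel send buffer at ~32 KB (an arbitrary example).
         Linux doubles the value you pass to allow for bookkeeping
         overhead, and setting SO_SNDBUF explicitly switches off the
         kernel's send-buffer auto-tuning for this socket. */
      int sndbuf = 32 * 1024;
      if (setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &sndbuf, sizeof(sndbuf)) < 0)
          perror("setsockopt(SO_SNDBUF)");

      /* Read back the effective size; expect roughly twice the request. */
      socklen_t len = sizeof(sndbuf);
      if (getsockopt(fd, SOL_SOCKET, SO_SNDBUF, &sndbuf, &len) == 0)
          printf("effective send buffer: %d bytes\n", sndbuf);

      return 0;
  }

A smaller send buffer queues less unsent video in the kernel, which is what keeps the latency bounded, at the cost of throughput on long fat pipes; it's a trade-off you'd want to measure.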
That's all I can think of right now, but be aware that this isn't complete.
Best regards, René.