
The CoDel queue management algorithm

By Jonathan Corbet
May 9, 2012
"Bufferbloat" can be thought of as the buffering of too many packets in flight between two network end points, resulting in excessive delays and confusion of TCP's flow control algorithms. It may seem like a simple problem, but the simple solution—make buffers smaller—turns out not to work. A true solution to bufferbloat requires a deeper understanding of what is going on, combined with improved software across the net. A new paper from Kathleen Nichols and Van Jacobson provides some of that understanding and an algorithm for making things better—an algorithm that has been implemented first in Linux.

Your editor had a classic bufferbloat experience at a conference hotel last year. An attempt to copy a photograph to the LWN server (using scp) would consistently fail with a "response timeout" error. There was so much buffering in the path that scp was able to "send" the entire image before any of it had been received at the other end. The scp utility would then wait for a response from the remote end; that response would never come in time because most of the image had not, contrary to what scp thought, actually been transmitted. The solution was to use the -l option to slow down transmission to a rate closer to what the link could actually manage. With scp transmitting slower, it was able to come up with a more reasonable idea for when the data should be received by the remote end.

And that, of course, is the key to avoiding bufferbloat issues in general. A system transmitting packets onto the net should not be sending them more quickly than the slowest link on the path to the destination can handle them. TCP implementations are actually designed to figure out what the transmission rate should be and stick to it, but massive buffering defeats the algorithms used to determine that rate. One way around this problem is to force users to come up with a suitable rate manually, but that is not the sort of network experience most users want to have. It would be far better to find a solution that Just Works.

Part of that solution, according to Nichols and Jacobson, is a new algorithm called CoDel (for "controlled delay"). Before describing that algorithm, though, they make it clear that just making buffers smaller is not a real solution to the problem. Network buffers serve an important function: they absorb traffic spikes and equalize packet rates into and out of a system. A long packet queue is not necessarily a problem, especially during the startup phase of a network connection, but long queues as a steady state just add delays without improving throughput at all. The point of CoDel is to allow queues to grow when needed, but to try to keep the steady state at a reasonable level.

Various automated queue management algorithms have been tried over the years; they have tended to suffer from complexity and a need for manual configuration. Having to tweak parameters by hand was never a great solution even in ideal situations, but it fails completely in situations where the network load or link delay time can vary widely over time. Such situations are the norm on the contemporary Internet; as a result, there has been little use of automated queue management even in the face of obvious problems.

One of the key insights in the design of CoDel is that there is only one parameter that really matters: how long it takes a packet to make its way through the queue and be sent on toward its destination. And, in particular, CoDel is interested in the minimum delay time over a time interval of interest. If that minimum is too high, it indicates a standing backlog of packets in the queue that is never being cleared, and that, in turn, indicates that too much buffering is going on. So CoDel works by adding a timestamp to each packet as it is received and queued. When the packet reaches the head of the queue, the time spent in the queue is calculated; it is a simple calculation of a single value, with no locking required, so it will be fast.
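In rough C, that bookkeeping might look like the following sketch; the names and structures here are invented for illustration and are not those of the Linux patch:

    /* Sketch of CoDel's per-packet bookkeeping, for illustration only. */

    #include <stdint.h>

    struct pkt {
        uint64_t enqueue_time_ns;   /* stamped when the packet is queued */
        /* ... payload, list linkage, etc. ... */
    };

    /* On enqueue: record the arrival time. */
    static void codel_stamp(struct pkt *p, uint64_t now_ns)
    {
        p->enqueue_time_ns = now_ns;
    }

    /* On dequeue: the sojourn time is a single subtraction. */
    static uint64_t codel_sojourn_ns(const struct pkt *p, uint64_t now_ns)
    {
        return now_ns - p->enqueue_time_ns;
    }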

Less time spent in queues is always better, but that time cannot always be zero. Built into CoDel is a maximum acceptable queue time, called target; if a packet's time in the queue exceeds this value, then the queue is deemed to be too long. But an overly-long queue is not, in itself, a problem, as long as the queue empties out again. CoDel defines a period (called interval) during which the time spent by packets in the queue should fall below target at least once; if that does not happen, CoDel will start dropping packets. Dropped packets are, of course, a signal to the sender that it needs to slow down, so, by dropping them, CoDel should cause a reduction in the rate of incoming packets, allowing the queue to drain. If the queue time remains above target, CoDel will drop progressively more packets. And that should be all it takes to keep queue lengths at reasonable values on a CoDel-managed node.
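The escalation follows a simple control law: once the queue has stayed above target for a full interval, CoDel drops a packet, and subsequent drops are scheduled at intervals that shrink with the square root of the drop count. A simplified sketch in C, loosely following the published pseudocode (names invented; ECN marking, the minimum-queue-size check, and several other refinements of the real algorithm are omitted):

    #include <math.h>
    #include <stdbool.h>
    #include <stdint.h>

    #define TARGET_NS   (5ULL * 1000 * 1000)     /* 5 ms   */
    #define INTERVAL_NS (100ULL * 1000 * 1000)   /* 100 ms */

    struct codel_state {
        uint64_t first_above_time; /* when delay first went above target */
        uint64_t drop_next;        /* time of the next scheduled drop */
        unsigned count;            /* drops since entering dropping state */
        bool     dropping;
    };

    /* Control law: the next drop comes interval/sqrt(count) after this
       one, so drops get closer together as long as the queue stays bad. */
    static uint64_t control_law(const struct codel_state *s, uint64_t t)
    {
        return t + (uint64_t)(INTERVAL_NS / sqrt((double)s->count));
    }

    /* Decide whether the packet just dequeued should be dropped. */
    static bool codel_should_drop(struct codel_state *s,
                                  uint64_t sojourn_ns, uint64_t now)
    {
        if (sojourn_ns < TARGET_NS) {
            /* Queue delay is acceptable again: stop dropping. */
            s->first_above_time = 0;
            s->dropping = false;
            return false;
        }
        if (!s->dropping) {
            if (s->first_above_time == 0) {
                /* Delay just went above target; give it one interval. */
                s->first_above_time = now + INTERVAL_NS;
            } else if (now >= s->first_above_time) {
                /* Above target for a full interval: start dropping. */
                s->dropping = true;
                s->count = 1;
                s->drop_next = control_law(s, now);
                return true;
            }
            return false;
        }
        if (now >= s->drop_next) {
            s->count++;
            s->drop_next = control_law(s, now);
            return true;
        }
        return false;
    }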

The target and interval parameters may seem out of place in an algorithm that is advertised as having no knobs in need of tweaking. What the authors have found, though, is that a target of 5ms and an interval of 100ms work well in just about any setting. The use of time values (rather than packet or byte counts) makes the algorithm function independently of the speed of the links it is managing, so there is no real need to adjust them. Of course, as they note, these are early results based mostly on simulations; what is needed now is experience using a functioning implementation on the real Internet.

That experience may not be long in coming, at least for some kinds of links; there is now a CoDel patch for Linux available thanks to Dave Täht and Eric Dumazet. This code is likely to find its way into the mainline fairly quickly; it will also be available in the CeroWrt router distribution. As the early CoDel implementation starts to see some real use, some shortcomings will doubtless be encountered and it may well lose some of its current simplicity. But it has every appearance of being an important component in the solution to the bufferbloat problem.

Of course, it's not the only component; the problem is more complex than that. There is still a need to look at buffer sizes throughout the stack; in many places, there is simply too much buffering in places where it can do no good. Wireless networking adds some interesting challenges of its own, with its quickly varying link speeds and complexities added by packet aggregation. There is also the little problem of getting updated software distributed across the net. So a full solution is still somewhat distant, but the understanding of the problem is clearly growing and some interesting approaches are beginning to appear.
Index entries for this article:
Kernel: Networking/Bufferbloat



Ironical

Posted May 10, 2012 11:22 UTC (Thu) by renox (guest, #23785) [Link] (2 responses)

Anyone see the irony in having an algorithm included in an open-source kernel which is referenced in a paper only available from the (closed) ACM?

I wonder why there is a boycott against Elsevier but not against the ACM...

Ironical

Posted May 10, 2012 14:00 UTC (Thu) by peter-b (subscriber, #66996) [Link] (1 responses)

Um, I can download the paper directly from the ACM just fine from the linked page, without logging in through any of the various academic networks that I have access to. I'm not entirely clear what you're complaining about.

Ironical

Posted May 10, 2012 19:29 UTC (Thu) by hmh (subscriber, #3838) [Link]

He doesn't know the difference between ACM and the ACM Queue, apparently. The Queue is not behind a pay-wall.

IMHO the irony is in the reasons why we never got to see the revised paper by VJ and KN back in 2006, finally disclosed in one of the comments of this blog post: http://gettys.wordpress.com/2012/05/08/fundamental-progre...

The CoDel queue management algorithm

Posted May 10, 2012 12:28 UTC (Thu) by cesarb (subscriber, #6266) [Link] (4 responses)

> Wireless networking adds some interesting challenges of its own, with its quickly varying link speeds

> The use of time values (rather than packet or byte counts) makes the algorithm function independently of the speed of the links it is managing, so there is no real need to adjust them.

If it works well independent of link speed, surely even variations in wireless link speed will not be a problem?

The CoDel queue management algorithm

Posted May 10, 2012 12:51 UTC (Thu) by Fowl (subscriber, #65667) [Link]

Possibly not for maximum throughput though. (TCP slow start and all that)

The CoDel queue management algorithm

Posted May 11, 2012 0:29 UTC (Fri) by mtaht (subscriber, #11087) [Link]

Most wireless drivers are already horribly overbuffered, and over-retry. This affects CoDel's estimates enormously. CoDel does do a pretty good job when it gets control of the queue, but it is sub-optimal. Substantial rework of the wireless subsystem appears to be required, at least at present.

The CoDel queue management algorithm

Posted May 13, 2012 21:29 UTC (Sun) by dlang (guest, #313) [Link] (1 responses)

An algorithm may work well with a wide variety of link speeds, as long as the link speed remains relatively stable over time.

But when the link speed changes too drastically in too short a time, the queue size that was right for the old speed is going to be too large (or too small) for the new rate.

The CoDel queue management algorithm

Posted May 19, 2012 7:23 UTC (Sat) by Tobu (subscriber, #24111) [Link]

Looking at figure 7 in the ACM queue article, showing a wireless simulation with varying bandwidth, a 50Mb/s->1Mb/s transition takes about 20s to get the delays back under 200ms. If there's an intermediate bandwidth stepping, resorbing the queue is quicker.

The CoDel queue management algorithm

Posted May 18, 2012 5:11 UTC (Fri) by slashdot (guest, #22014) [Link] (32 responses)

5ms is way too much, especially for non-backbone routers.

5ms over 20 hops leads to 100ms, which is bad for interactive uses such as multiplayer gaming (where you ideally want 5-10ms end-to-end roundtrip).

A more sensible default seems to be something like 10-50% of packet travel time over all outgoing links controlled by the queue (but never less than 0.01-0.1 ms).

The CoDel queue management algorithm

Posted May 18, 2012 9:02 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link] (2 responses)

I have around 200ms of delay over 17 hops (including the Atlantic Ocean hop). My ping between US coasts is 55ms over 9 hops.

So 5ms per hop looks just in the right ballpark for a typical delay.

The CoDel queue management algorithm

Posted May 24, 2012 12:06 UTC (Thu) by gb (subscriber, #58328) [Link] (1 responses)

100 or 200 ms doesn't sound too large: the diameter of the Earth is 12,742 km while light speed is 300,000 km/s, so the best time you can get to the other side of the Earth, even in a straight line through it, is about 42 ms =)

The CoDel queue management algorithm

Posted Jul 25, 2012 4:41 UTC (Wed) by scientes (guest, #83068) [Link]

I've always seen about a 200ms delay around the world; that has never been where I've seen issues. The main issue is that I've seen a lot of DSL providers with 50ms for the first hop, which is quite annoying.

The CoDel queue management algorithm

Posted May 28, 2012 15:33 UTC (Mon) by nye (guest, #51576) [Link] (25 responses)

>5ms over 20 hops leads to 100ms, which is bad for interactive uses such as multiplayer gaming (where you ideally want 5-10ms end-to-end roundtrip).

People seem to play multiplayer games over residential internet connections, where the end-to-end RTT simply cannot be below 30-40ms, and longer is common (15-20ms ping to the first hop is pretty much the best case scenario).

I don't know how you're achieving 5-10ms RTT (although I would be interested - leased line? FTTP?), but it's not representative of the vast majority of connections used for gaming.

The CoDel queue management algorithm

Posted May 28, 2012 18:13 UTC (Mon) by Jonno (subscriber, #49613) [Link] (22 responses)

I've got a pretty standard 100/10 Mbps residential internet connection (at $23/month), and my ping times are not nearly that high.

First hop (actually two hops, as it goes through my Linux router):
64 bytes from xx.xxx.xx.1: icmp_req=1 ttl=255 time=0.731 ms

To server on national university backbone 270 km away:
64 bytes from vision.sunet.se (192.36.171.156): icmp_req=1 ttl=56 time=3.84 ms

To residential ADSL user with different ISP over an OpenVPN network:
64 bytes from 192.168.64.10: icmp_req=1 ttl=63 time=23.7 ms

The CoDel queue management algorithm

Posted May 28, 2012 22:47 UTC (Mon) by raven667 (subscriber, #5198) [Link] (21 responses)

I wouldn't say that's standard, 100Mbit is way at the top end of the range at $100/mo+, if it is available at all. 10/1 or 4/0.5 are much more common and only cost $30-$50. You must have some incredible infrastructure where you are at. Also my numbers match up with the poster, there is 10-20 ms of latency between my laptop and my first upstream hop. Maybe you are on fiber and don't know it?

The CoDel queue management algorithm

Posted May 29, 2012 1:59 UTC (Tue) by dlang (guest, #313) [Link]

there are still a LOT of people stuck with 1.5 or 3Mb down and 768Kb up. This is worlds better than dialup, but if developers are considering multi-Mb in each direction 'typical', it's no wonder that they think that having everything in the cloud will work (or that doing network OS installs should be the norm), and then wonder why they have so few users.

The CoDel queue management algorithm

Posted May 29, 2012 6:31 UTC (Tue) by Jonno (subscriber, #49613) [Link] (15 responses)

I've got an ethernet jack in the hallway labelled "Internet", which is quite typical in Swedish apartment complexes. 100/10 Mbps only costs $23/month, though a business line with bandwidth guarantees cost much more (I usually only get 60-80 Mbps down and 8-9 Mbps up, and occasionally only 30-40 Mbps down).

People living in houses rather than apartments are usually stuck with 20/1 Mbps ADSL, I know my dad pays $17/month for that speed, and his ping times are worse than mine. But when I ping him over our VPN tunnel it is still way below the 30-40 ms you quoted as a minimum.

While I know that some backwater nations, such as the US, have all but no working infrastructure, don't assume that is the norm, that is the exception.

The CoDel queue management algorithm

Posted May 29, 2012 7:46 UTC (Tue) by spaetz (guest, #32870) [Link] (14 responses)

> While I know that some backwater nations, such as the US, have all but no working infrastructure, don't assume that is the norm

Let me assure you that this is *not* the norm even in EU countries (I can't speak for Asia). If you are below 20-30ms ping time for the first hop, you are the exception (perhaps not in Sweden, but in Europe for sure).

And there is no need to insult countries here.

The CoDel queue management algorithm

Posted May 29, 2012 8:47 UTC (Tue) by dlang (guest, #313) [Link] (3 responses)

the simple truth is that it's far easier to wire a small country than it is to wire a large country.

If you have a high population density you also have the ability to spread the cost of doing so across many more people.

and frankly, it helps to be late to the party as you only have to implement the latest and best technology, not each generation as it is developed (never mind paying the cost of the development)

the result is that in small, high-population-density countries you can have really good infrastructure, but in larger areas you aren't going to have nearly as good an infrastructure, no matter what the cost.

I live in the greater Los Angeles area, but out around the edge of it. I pay $130/month for 1.5Mb down/768Kb up. I could upgrade to an ethernet connection up to 5Mb, but then it would cost me $100 per Mb.

and I'm in a 'good' (but not 'great') area for connectivity. Where my sister lives, their only option is satellite (with a ~1000 ms first-hop ping time). To give you an idea of how remote they are: they literally live 20 miles from the nearest fast food place.

now there are places in the US with connectivity almost as good as what you get (although at higher out-of-pocket costs; I'm assuming that your system has some tax money included in it), and I wouldn't lay a bet either way on whether there is more area covered with such good connectivity in the US vs in Sweden, but it could be several multiples larger in the US and not make a significant dent in the problem.

The CoDel queue management algorithm

Posted May 29, 2012 13:46 UTC (Tue) by Jonno (subscriber, #49613) [Link]

> the simple truth is that it's far easier to wire a small country than it
> is to wire a large county.

Sweden may only be 5% of the US, but it's still larger than any US state but Texas and Alaska. In a more fair country-to-country comparison, Sweden is larger than, for example, Germany, Italy and the United Kingdom.

> If you have a high population density you also have the ability to
> spread the cost of doing so across many more people.

Well, Sweden has 20.6 residents/km², while the US has 33.7 residents/km², so obviously the US should have a much better Internet connection than Sweden...

> and frankly, it helps to be late to the party as you only have to
> implement the latest and best technology, not each generation as it
> is developed (never mind paying the cost of the development)

Well, the Swedish broadband infrastructure project began back in 1998, and for the last few years people have started to complain that next to nothing has happened for over 5 years. While most of Sweden has been upgraded to ethernet, fiber or cable connections over the years, 35% of all Swedish households are still limited to previous-generation broadband access, because the government stopped subsidizing broadband infrastructure projects once ADSL was deployed everywhere (well, to 99.91% of households) back in 2005.

Yes, ADSL is considered previous generation broadband access in Sweden, even though base stations have been upgraded to support 24/1 Mbps compared to the 8/1 Mbps that was common in 2005.

The current generation of broadband access (usually an ethernet jack, sometimes a cable-tv modem) started to roll out in 2000, but wasn't common until 2005, when it reached 40% coverage. Today that figure is 65%. Personally, I got a "real" broadband connection in 2002, though back then the speed was only 10/2 Mbps and it cost $28 per month. I got my current 100/10 Mbps connection when I moved in 2004, though at the time it was quite expensive at $45 per month.

> I live in the greater Los Angeles area, but out around the edge of it.
> I pay $130/month for 1.5Mb down/768Kb up. I could upgrade to an ethernet
> connection up to 5Mb, but then it would cost me $100 per Mb.

Poor soul, for that kind of money ($126) I could get 250/100 Mbps. Of course, that is because I live in an apartment complex in the middle of a medium-sized town. Most rural residents can't get anything better than ADSL at 24/1 Mbps, and will have to pay $49 to get even that much.

> I'm assuming that your system has some tax money included in it

Well, from 2002 through 2005 the government subsidized about half the cost of all broadband infrastructure projects, but since then they have only subsidized rural broadband projects, and usually only in the form of targeted low-interest loans.

The CoDel queue management algorithm

Posted May 30, 2012 13:50 UTC (Wed) by job (guest, #670) [Link]

Higher population density? Late to the party? Please try to get your facts straight next time if you're going to waste electrons on a post.

The CoDel queue management algorithm

Posted May 30, 2012 14:13 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link]

In Ukraine, no tax money goes into the ISP business; indeed, competition is fierce.

Yet we get 100Mb for $8 a month in cities.

The CoDel queue management algorithm

Posted May 29, 2012 22:05 UTC (Tue) by nix (subscriber, #2304) [Link] (9 responses)

Quite. I *might* be able to get this in the UK with fibre to the premises (rare as hen's teeth), but fibre to the cabinet couldn't do it and my existing bonded ADSL connection to an exchange <100m away has no hope:

1 spindle.srvr.nix (192.168.16.15) 0.056 ms 0.040 ms 0.063 ms
2 fold.srvr.nix (192.168.14.1) 0.341 ms 0.317 ms 0.297 ms
3 [ADSL router] 1.688 ms 1.685 ms 1.674 ms
4 c.gormless.thn.aa.net.uk (90.155.53.53) 17.026 ms 24.144 ms 27.475 ms
5 b.aimless.thn.aa.net.uk (90.155.53.42) 26.062 ms 33.940 ms 33.919 ms

(as usual, the timing figures from the last two hops are upper bounds because those machines are going to be delaying ICMP replies as they see fit: still, it's in the 20ms range, not the 3ms range).

The CoDel queue management algorithm

Posted May 30, 2012 10:08 UTC (Wed) by nye (guest, #51576) [Link] (8 responses)

>my existing bonded ADSL connection

Now we're *really* OT, but what are you using to do the bonding?

I was hoping to do that with a couple of Sangoma PCI ADSL2 modems, but in the small print it turned out that you can only do bonding with them if you're using PPPoE, and I don't know of any UK ISPs that offer that.

I've seen ADSL cards that can do it, but at that point you're better off just going to ADSL2 unless you're doing 4-way bonding or more, and I've seen expensive proprietary boxes that just sort it all out for you, but nothing affordable for an individual or small business.

The CoDel queue management algorithm

Posted May 30, 2012 10:46 UTC (Wed) by TomH (subscriber, #56149) [Link] (6 responses)

Well http://compton.nu/2009/12/per-packet-load-balancing-with-... explains how I bonded four ADSL links with the same ISP.

We're bonding two fibre-to-the-cabinet lines instead now, which use PPPoE and make things a bit simpler, as they are terminated with pppd on the linux box where they can be bonded with teql.

In fact the BT Wholesale ADSL platform does support PPPoE but most ISPs don't tell you that... I did have my line at home running that way for a short while though, in preparation for upgrading it to fibre to the cabinet.

The CoDel queue management algorithm

Posted May 30, 2012 14:32 UTC (Wed) by nye (guest, #51576) [Link] (2 responses)

>Well http://compton.nu/2009/12/per-packet-load-balancing-with-... explains how I bonded four ADSL links with the same ISP.

Thank you for that link. I hope nobody minds the thread-hijacking to ask a little more about this - it's not a topic that seems to have much Google juice.

With that setup, do I understand correctly that - assuming the ISP has the bonding set up on their end - configuring the TEQL interface as you have will work regardless of the connection method used? It looks like it's actually very simple so long as you know the secret sauce.

What I'm not sure about is the IP addresses on the routers. You say that they need to support two different *LAN* addresses; how are the WAN ports configured? Are they bridged or do they need an additional address?

Say for the sake of example that you have a /29 netblock, giving you 6 addresses once you've accounted for the network and broadcast addresses. You use one that's shared between the routers and one for the teql0 interface, leaving 4 available for other machines. Is that correct?

Incidentally, would you recommend the Zyxel P-660 series? I do wonder if a better modem/router might solve my periodic loss of ADSL synchronisation, and while I've tried two different models, it's entirely plausible that they're both fairly intolerant of noisy lines.

The CoDel queue management algorithm

Posted May 30, 2012 14:50 UTC (Wed) by TomH (subscriber, #56149) [Link] (1 responses)

Those routers allow the LAN interface to be given up to two aliases in addition to their primary address.

So what I did was to set the primary address to a unique RFC1918 address, which was just used for management purposes when I needed to telnet to a specific router, and then to set the alias on each router to the same, shared, public address.

I then added static routes on each router to pass traffic for our public IPs back to the linux box where the bonding was done.

The WAN ports were configured with separate public addresses - a unique one for each router.

Bridge mode wasn't used - they were acting as normal routers.

I wouldn't like to say if the P660 is particularly good or bad - they were the free routers our ISP provided with the lines.

The CoDel queue management algorithm

Posted May 30, 2012 15:16 UTC (Wed) by nye (guest, #51576) [Link]

>The WAN ports were configured with separate public addresses - a unique one for each router.

Interesting, so this sounds like a different configuration than some bonding setups which appear to require only a single public IP address. I wonder if that's down to how the ISP configures their interfaces.

I suspect I could make a lot more progress here if experimentation didn't mean scheduling connection downtime of an unknown duration, which in practice means being physically present in a locked office building in the dead of night (for which honestly they Do Not Pay Me Enough™).

At any rate, thanks for the information.

The CoDel queue management algorithm

Posted May 31, 2012 11:21 UTC (Thu) by nix (subscriber, #2304) [Link] (2 responses)

The question really is, will any respondents to this question be using any ISP *other* than AAISP? They really do seem to be the only people who've tried to make line bonding work in any meaningful way, in the UK at least.

The CoDel queue management algorithm

Posted May 31, 2012 13:26 UTC (Thu) by james (subscriber, #1325) [Link] (1 responses)

This is going somewhat off-topic, but Eclipse do, although I haven't used it.

The CoDel queue management algorithm

Posted May 31, 2012 15:03 UTC (Thu) by nye (guest, #51576) [Link]

Entanet also offer a bonded option, and they're one of the wholesalers used by UKFSN.

The CoDel queue management algorithm

Posted May 31, 2012 11:20 UTC (Thu) by nix (subscriber, #2304) [Link]

All I'm using to bond is a multihop default route

ip route add default nexthop via 81.187.191.130 dev adsl weight 1 nexthop via 81.187.191.132 dev bdsl weight 1

to Zyxel P-660R ADSL routers that have no idea I'm bonding them, and two of Julian Anastasov's patches from http://www.ssi.bg/~ja/, to wit 00_static_routes-2.6.39-15.diff and 01_alt_routes-3.0-12.diff, to ensure that when one hop's routes go stale because of upstream problems we switch to the other. (I can't rely on normal 'when the link goes dead' bonding-driver stuff because the link that would go dead comes out of the router. I could fix this by using bridging, but Zyxel's documentation for that setup is so appalling that I haven't tried to make that work yet.)

However, I am lucky in that my ISP (AAISP) provides direct support for bonding on the ISP end: any packets sent to my public IPv4 or IPv6 address ranges will end up being evenly scattered between my lines (which fortunately are of similar speed, see the weight above). So all I have to handle is outbound routing.

The CoDel queue management algorithm

Posted May 29, 2012 9:13 UTC (Tue) by Cyberax (✭ supporter ✭, #52523) [Link] (3 responses)

>cyberax@cybnb:~$ traceroute lwsrv
>traceroute to lwsrv (94.45.58.146), 30 hops max, 60 byte packets
> 1 192.168.18.100 (192.168.18.100) 3.236 ms 3.237 ms 3.247 ms
> 2 unallocated.sta.lan.com.ua (92.249.102.1) 3.256 ms 3.263 ms 3.272 ms
> 3 out.ua-ix.lan.com.ua (92.249.120.249) 45.030 ms 45.046 ms 45.021 ms
> 4 dtel-ix.as6723.net (193.25.180.61) 3.197 ms 5.642 ms 5.714 ms
> 5 j2-ua.gw.skif.com.ua (91.90.19.210) 6.323 ms 6.332 ms 8.713 ms
> 6 something.com.ua (94.45.x.x) 3.012 ms 5.023 ms 17.928 ms

That's a trace from my residential 100Mb to my office (some 30km away). And I pay $6 a month for it.

You Americans are stuck in a stone age :)

The CoDel queue management algorithm

Posted May 29, 2012 15:06 UTC (Tue) by nye (guest, #51576) [Link] (2 responses)

Well I'm from the UK, so my national bias is to assume that 'residential' == 'ADSL', where I don't believe it's technically possible to get pings that low. In some places you can get cable (with a choice of Virgin Media or... Virgin Media) and while I don't know so much about that, by reputation the ping times are at least as bad, often worse.

Over the last couple of years we've been seeing a major upgrade rollout, which means that people in urban areas have a fairly good chance of getting VDSL2, though there's only one wholesale provider and IIUC everyone offering it is reselling exactly the same product, which has fairly low caps until you start paying crazy money (eg £25-30/month for a 30GB limit, or ~£75 if you go all the way up to 180GB).

I don't actually know if VDSL2 is any better in terms of latency though, quite possibly not. Personally I'm relatively happy with my un-capped ~10Mb ADSL2 connection for £18/month, that usually only drops out for a couple of minutes at a time a few times per day (emphasis on 'relatively'). The national average is far worse than that, but we're only about 450m from the exchange.

Do you know what technology you're using?

The CoDel queue management algorithm

Posted May 29, 2012 15:23 UTC (Tue) by Cyberax (✭ supporter ✭, #52523) [Link]

Sure. I live in an apartment building and get Ethernet to my router (actually, to my Linux machine), with simple static IP assignment (no DHCP, PPP, etc). I can also get routed networks up to /28 if I need them, for a small additional price.

The apartment building is connected by 10G fiber to my provider's hub, which is connected to the main national Internet traffic exchange ( http://www.ua-ix.net.ua/eng.phtml ). It's simple and fast, and works great in dense residential areas.

In contrast, I have to pay $50 a month for a 30/5 cable connection when I live in Brooklyn in the US. And it's still considered to be pretty cheap :(

The CoDel queue management algorithm

Posted Jul 25, 2012 12:13 UTC (Wed) by farnz (subscriber, #17727) [Link]

All DSL technologies in current use run at a 4 kilobaud signalling rate; higher speeds are provided by running more bits per symbol, so your 10MBit/s ADSL2 is 2,500 bits per symbol, while 40M VDSL2 would be 10,000 bits per symbol.

This puts a lower bound on DSL latency of 0.25 milliseconds; practically, VDSL2 seems able to reliably reach the 5ms RTT range for the DSL link itself, while my home ADSL2+ has around 25ms RTT.

The CoDel queue management algorithm

Posted May 30, 2012 14:02 UTC (Wed) by job (guest, #670) [Link] (1 responses)

Any hardcore gamer would tell you a 100ms connection is pretty detrimental to your game performance (hence LAN parties).

I have a few VoIP installations for small businesses, and I generally don't want to do them if I can't keep latency under 50ms. It starts to get very annoying before 100ms (as in the "satellite link" effect).

That's why I think queue management algorithms should not be tuned for overused DSL links. While that may be relevant to those who are stuck with them, the buffers in your DSL equipment will kill your latency anyway. Already LTE connections have latency under ten milliseconds if you can avoid keeping them filled.

I agree 10 ms per hop sounds like almost an order of magnitude off. The people for which that is relevant won't be helped by this anyway.

The CoDel queue management algorithm

Posted May 30, 2012 14:55 UTC (Wed) by mtaht (subscriber, #11087) [Link]

fq_codel can cut 'sparse streams', such as voip and gaming packets, to absurdly low delays.

codel + qfq can do even better.

The 5ms target of the overall codel algorithm is just that - a target. Sometimes it's more, usually it's less, and with fair queuing (fq_codel) applied on top, it can be MUCH less, especially where you need it. As in sub-1ms. Which should make a lot of gamers and voip/videoconferencing people very happy.

Also, for 10GigE installations people have been using 500us as the target, which seems to work well. I wouldn't recommend changing the target delay for anything else at the present time.

Rather than theorize, can I merely recommend that people try this stuff out for themselves and ask questions on the codel list at lists.bufferbloat.net.
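For example, assuming a kernel with the patch applied and a matching iproute2, attaching it to an interface looks something like this (illustrative; the option syntax may vary by version):

    # use the default 5ms target and 100ms interval
    tc qdisc add dev eth0 root fq_codel

    # or, for a 10GigE link, override the target as described above
    tc qdisc replace dev eth0 root fq_codel target 500us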

The CoDel queue management algorithm

Posted Jun 29, 2012 11:34 UTC (Fri) by Lennie (subscriber, #49641) [Link] (2 responses)

I was thinking about this again, but it is only the slowest link that will need to buffer. And possibly some congested links.

If they are all congested links, your gaming would suffer anyway, because the alternative to your packets getting buffered is that they get dropped.

So if every link is congested, you'd lose a lot of the packets anyway.

The CoDel queue management algorithm

Posted Jun 30, 2012 8:43 UTC (Sat) by dlang (guest, #313) [Link] (1 responses)

There are many cases where packets getting dropped _is_ better than packets getting buffered.

One really popular application where this is true is your telephone. The phone system backbone has been digital for years (ATM was invented specifically for phone use), and with real-time audio you really do want to have packets dropped if you can't keep up rather than buffering and falling behind.

The CoDel queue management algorithm

Posted Jun 30, 2012 11:38 UTC (Sat) by Lennie (subscriber, #49641) [Link]

I'm not saying that it's bad that the packets get dropped; actually, I totally agree.

TCP was designed around the idea that packets get dropped (and in a timely fashion! That is what the bufferbloat problem is all about, after all).

The CoDel queue management algorithm

Posted Jul 22, 2012 16:33 UTC (Sun) by forthy (guest, #1525) [Link] (1 responses)

I've been working on better flow control for most of this year, as part of my net2o project. Minimizing delay is only one piece of the picture, and of course, it is necessary. Using an experimental protocol instead of TCP gives the benefit that you can acquire whatever data you need (timestamps, mostly).

I've figured out that I need to do a bit more. First of all, I try to measure the achievable bandwidth by sending short bursts of packets out, and let the receiver time them. From that timing (burst start to end), the receiver can calculate a transmission rate, and communicate it back to the sender. This is important to quickly adapt to changing conditions, as we have them in our networks now - especially WLANs, where the achievable rate can drop and rise quickly as we move our mobile equipment around.

This measurement would be good enough for competing data streams if the routers did fair queuing in the buffer instead of the FIFO they do (fair meaning round-robin between different connections). The FIFO nature, however, means that I need to measure a bit more - I also have to take into account what happens with the burst heads. If they get delayed more and more, the rate is too fast. The calculation here is: take the additional delay between bursts, and multiply it by the number of bursts in flight at the current (too fast) rate. Wait that long before sending the next burst.
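As a sketch of those two calculations (illustrative C only; this is not net2o's actual code, and all names are invented):

    #include <stdint.h>

    /* Receiver side: achievable rate from one burst's timing, i.e. the
       bytes between the first and last packet divided by the time span. */
    static double burst_rate_bps(uint64_t first_ns, uint64_t last_ns,
                                 uint64_t burst_bytes)
    {
        double span_s = (last_ns - first_ns) / 1e9;
        return span_s > 0 ? (8.0 * burst_bytes) / span_s : 0.0;
    }

    /* Sender side, the rule above: if burst heads are picking up
       extra_delay of added queuing delay, wait extra_delay times the
       number of bursts in flight before sending the next burst. */
    static uint64_t next_burst_wait_ns(uint64_t extra_delay_ns,
                                       unsigned bursts_in_flight)
    {
        return extra_delay_ns * bursts_in_flight;
    }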

This works quite well most of the time. But if there is still more delay, I have added an exponential slow-down. The parameter for this is not fixed; it's derived from the measured maximum vs. minimum delay. As correctly stated in the article, "tuning" by the user is no good. For a gigabit LAN, 100ms is already excessive. The rule of thumb is that the buffer should capture a round-trip delay's worth of data - but when you compete with bufferbloated TCP streams, it can be more. For the LinuxTag WLAN (the worst case I've seen so far), 2s was quite right. Hope it's better next year, when the majority uses Linux 3.5 or later ;-).

BTW: people designed these buffers around Windows XP, which has no window scaling, i.e. the maximum "in flight" data is 64kB. That's what I now assume as a safe guess for the minimal buffer size, so a net2o connection initially sends out 64kB quite fast, and then waits for the response.

The CoDel queue management algorithm

Posted Jul 23, 2012 1:35 UTC (Mon) by dlang (guest, #313) [Link]

the biggest problem I see with your approach is dealing with drastic bandwidth changes (up or down) after your initial measurements take place. That's the killer today. If bandwidth is stable, or even only changing gradually, the traditional algorithms (with sane buffer sizes) or CoDel will handle things just fine.


Copyright © 2012, Eklektix, Inc.
This article may be redistributed under the terms of the Creative Commons CC BY-SA 4.0 license
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds








