Re: Traffic shaping questions and possible extensions


Justin Schoeman wrote:

It has been quite a while since I looked at what was happening in Linux traffic shaping, so I am not sure if this has been discussed / improved on since I last looked.

We use a traffic shaper based on HTB. The basic principles work fine, but we have a problem with 'intermittent traffic' like http and interactive ssh sessions.

Each of these categories of traffic has its own class, and is allocated a certain 'guaranteed' rate. However, if other traffic is bursting into this bandwidth, we see that it very often takes so long for the other traffic to throttle back that the effective QoS is very bad.

If we hard cap the other traffic to leave the guarantee open, then web and ssh access is very very good.
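The hard-cap setup can be sketched like this (a minimal illustration only - the device name, rates and class IDs are assumptions, not our actual config):

```shell
# Root HTB qdisc; unclassified traffic falls into the bulk class 1:20.
tc qdisc add dev eth0 root handle 1: htb default 20
tc class add dev eth0 parent 1: classid 1:1 htb rate 1mbit

# Interactive class (http/ssh): guaranteed 300kbit, may borrow up to line rate.
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 300kbit ceil 1mbit prio 0

# Bulk class: ceil below line rate hard-caps it, so the guarantee stays open
# even before the bulk senders have throttled back.
tc class add dev eth0 parent 1:1 classid 1:20 htb rate 700kbit ceil 700kbit prio 1
```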

So the problem seems to lie with getting other traffic to slow down quicker.

Are there any current solutions/suggestions to working around this?

If not, I have one possible solution, and I would appreciate any feedback on it:

At the moment, if traffic cannot be sent immediately (there is no bandwidth available for it), then it is first queued, and if the queue gets too long, packets are dropped.

This will slow down the sender, but relies on the expiry of TCP timers to achieve this.

What I was thinking was that for bulk traffic that needs (and can tolerate) aggressive throttling, instead of queueing the packet, we keep a copy of the last ACK packet sent, and resend it.

The remote sender will see this as a duplicate ACK, and immediately enter its congestion avoidance algorithm, throttling the data.

Is this feasible, or is it a Really Stupid Idea (TM)?

I don't know if it would work, as most TCP connections use SACK now, so SACK blocks would be expected in those ACKs.

I assume you are only talking about ingress shaping for a slow(ish) line.

It's a pain - I used to have trouble on a 500kbit line, it's not so much of a problem now I have 7mbit :-)

Perhaps you could do better by being a bit more aggressive with queue lengths, and maybe use SFQ as well. I don't think timers come into it: if you tail drop, especially while the connections are still in slow start, then the bandwidth keeps ramping up until the next packet after the dropped one gets dequeued, which is too late to stop the ISP's buffer being filled.

Policers, though a bit inflexible, don't suffer from this.
How long are your queues? Maybe a bit too long. And how close to line rate are your rates? Maybe too close, so the queues only fill slowly.
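Shortening the bulk queue so drops are felt sooner might look like this (illustrative only - device, handles and the limit value are assumptions):

```shell
# Attach SFQ under the bulk class with a short per-qdisc limit,
# so bursting flows hit drops quickly instead of building a deep queue.
tc qdisc add dev eth0 parent 1:20 handle 20: sfq perturb 10 limit 12
```

Note that stock SFQ tail-drops; the head-drop behaviour mentioned below was a local hack.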

When I had 500kbit I tried changing SFQ to head drop - it seemed to help (though I have doubts about the hack every time I look at it). My aim was low latency, so I used netfilter connbytes to mark the first 80K (IIRC) of new connections; their bulk packets got sent to a class at 50% rate with the head-dropping SFQ at a limit of 12, which got them out of slow start fairly quickly.

It still wasn't perfect for latency when there was established bulk as well (that went to slightly longer per-user head droppers at 80% ceil, but at lower prio than the new-connection class), but it was a lot better than tail drop. It didn't seem too bad for browsing - but then it was browsing with 4 concurrent new connections that caused me most problems WRT latency for games, so slower browsing was a price worth paying.
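The connbytes marking can be sketched roughly like this (illustrative only - the device, mark value, class ID and byte threshold are assumptions, not my exact rules):

```shell
# Mark roughly the first 80KB of each connection so tc can steer
# those slow-start packets to a dedicated class.
iptables -t mangle -A POSTROUTING -o eth0 -p tcp \
  -m connbytes --connbytes 0:81920 --connbytes-dir both --connbytes-mode bytes \
  -j MARK --set-mark 10

# Match the firewall mark in tc and send it to the new-connection class.
tc filter add dev eth0 parent 1: protocol ip handle 10 fw flowid 1:10
```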

LARTC mailing list

