Re: PPPoE performance regression
On 06/10/12 10:32, David Woodhouse wrote:
> On Sun, 2012-06-10 at 10:50 +1000, Nathan Williams wrote:
>> When using iperf with UDP, we can get 20Mbps downstream, but only
>> about 15Mbps throughput when using TCP on a short ADSL line (line
>> sync at 25Mbps). Using iperf to send UDP traffic upstream at the
>> same time doesn't affect the downstream rate.
>> ...
>> I found the change responsible for the performance problem and
>> rebuilt OpenWrt with the patch reversed on kernel 3.3.8 to confirm
>> everything still works.
>>
>> So the TX buffer is getting full, which causes the netif queue to be
>> stopped and restarted after some skbs have been freed?
>
> The *Ethernet* netif queue, yes. But not the PPP netif queue, I
> believe. I think the PPP code just keeps blindly calling
> dev_queue_xmit() and throwing away packets when they're not accepted.
>
>> commit 137742cf9738f1b4784058ff79aec7ca85e769d4
>> Author: Karl Hiramoto <karl@xxxxxxxxxxxx>
>> Date:   Wed Sep 2 23:26:39 2009 -0700
>>
>>     atm/br2684: netif_stop_queue() when atm device busy and
>>     netif_wake_queue() when we can send packets again.
>
> Nice work; well done finding that. I've added Karl and DaveM, and the
> netdev@ list to Cc.
>
> (Btw, I assume the performance problem also goes away if you use
> PPPoA? I've made changes in the PPPoA code recently to *eliminate*
> excessive calls to netif_wake_queue(), and also to stop it from
> filling the ATM device queue. That was commit 9d02daf7 in 3.5-rc1,
> which is already in OpenWrt.)
>
> I was already looking vaguely at how we could limit the PPP queue
> depth for PPPoE and implement byte queue limits. Currently the PPP
> code just throws the packets at the Ethernet device and considers
> them 'gone', which is why it's hitting the ATM limits all the time.
>
> The patch you highlight changes the behaviour in a case that should
> never *happen* with PPP. It's suffering massive queue bloat if it's
> filling the ATM queue, and we should fix *that*.
Agreed, the issue is in the PPP layer. I've seen this issue with PPPoE before, but haven't had the itch, time or interest to fix it. A workaround to help mitigate the issue is to increase the TX queue length of the br2684 interface, and of the ATM device if possible. You'll pay the price in bufferbloat and latency.
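For anyone wanting to try Karl's workaround, a hedged sketch follows. The interface name is an assumption: br2684ctl conventionally creates `nas0`, but your setup may differ, and the queue length of 1000 is just an example value, not a recommendation:

```shell
# Raise the TX queue length of the br2684 interface (commonly nas0
# when created by br2684ctl -- substitute your interface name).
# Longer queues mask the drops at the cost of added latency.
ip link set dev nas0 txqueuelen 1000

# Verify the new length (shown as "qlen" in the output):
ip link show dev nas0
```

This only papers over the drops; as discussed above, the real fix is backpressure from the ATM layer up through PPP.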
--
Karl