Many concurrent connections

Linux Advanced Routing and Traffic Control

Hi all.

I'm writing an asynchronous crawler for internet pages.
The input is IP;URL pairs, so there are no DNS requests.

When I create 1000 connections at startup, I get ~1 percent
connection errors, but with 3000 connections I get ~10 percent errors.
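
For diagnosis, something like this should show where the failures are
counted, assuming they are visible in the kernel statistics at all
(standard tools, nothing specific to my setup):

# failed/reset connections and queue drops
netstat -s | grep -iE 'failed|reset|drop|overflow'
# summary of socket states
ss -s
# kernel messages about table overflows, memory pressure, etc.
dmesg | tail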

I start 16 processes like this (one per core),
so in total there are 3000 * 16 = 48000 connections.
The channel is 2 Gbit (bonding 4 of 1+1 Gbit).

How can I decrease the number of connection errors with a huge number
of connections (up to 3000 per process)?
I also tested without bonding, and the number of connection errors was
about the same.
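
One guess on my side: with thousands of outgoing connections per
process, the ephemeral port range could be a limiting factor. A sketch
of what I would check and tune (the widened range below is an
assumption, not something I have verified to help):

# current ephemeral port range (default is often 32768..61000)
sysctl net.ipv4.ip_local_port_range
# widen it if connect() fails with EADDRNOTAVAIL
sysctl -w net.ipv4.ip_local_port_range="1024 65535"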

Using linux 3.1.0-1-amd64

Tuning:
ulimit -n 655360
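
# The per-process limit alone may not be enough; the system-wide
# descriptor limit is worth checking too (fs.file-max is the standard
# knob; treating it as relevant here is an assumption). It should
# exceed the per-process limit times the number of processes.
sysctl fs.file-max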

# tx queue len
ifconfig bond0 txqueuelen 2000

# memory for buffers
sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.wmem_max=16777216
sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"

# do not cache ssthresh from previous connections
sysctl -w net.ipv4.tcp_no_metrics_save=1

# input packet backlog size
sysctl -w net.core.netdev_max_backlog=6000

# congestion control
sysctl -w net.ipv4.tcp_congestion_control=htcp

# max orphaned sockets
# (default was 262144, and I got "Out of socket memory" errors)
sysctl -w net.ipv4.tcp_max_orphans=$((262144 * 5))
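
# To see how close the box actually gets to these limits,
# /proc/net/sockstat reports orphan counts and TCP memory in use
# (a standard proc file, shown here just as a diagnostic sketch):
cat /proc/net/sockstat
# TCP memory thresholds in pages (min, pressure, max)
sysctl net.ipv4.tcp_mem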

# TIME_WAIT tuning
sysctl -w net.ipv4.tcp_tw_reuse=1
sysctl -w net.ipv4.tcp_fin_timeout=3
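
# A related limit I am also watching: even with reuse enabled, sockets
# can pile up in TIME_WAIT, and tcp_max_tw_buckets caps how many the
# kernel keeps (checking it is a sketch; raising it is an assumption):
sysctl net.ipv4.tcp_max_tw_buckets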

-- 
Azat Khuzhin