Hi all,

thank you all for the hints. I think I now know the cause of the problem: it is most probably the way our application communicates, which does not interact well with the mbuf allocation. Our application sends many short TCP packets (about 50/sec in my test setup, though in practice it varies wildly; each about 60 bytes of payload), and since latency matters more to us than throughput, it does so on a socket with the TCP_NODELAY option set. It looks like the stack does not coalesce the data into clusters in this setup and instead allocates a separate mbuf for each such packet. Of course, if the cable gets unplugged or something similar happens, this quickly leads to mbuf pressure that is twice what one would naively expect, because of the overhead (a 128-byte mbuf holding only 60 bytes of data). Considering that the amount of data held in mbufs can be (I guess) at least the number of connections times the TCP send window, this is not easy to manage on a system where 512 kB for network buffers is quite a luxury...

I am now testing a patch that allows tuning a few parameters in the TCP/IP stack, including the ones suggested by some of you (I know there is sysctl, but not for all of them), and I hope to come up with values that satisfy our needs.

The problem causing the 'unrecoverable' mbuf overflows was a completely different beast: something (a mangled packet?) sometimes puts the Ethernet controller of the LM3S9B90 processor into a state where it always thinks it has a packet, returns bogus data when one tries to read it, and can only be fixed by resetting the controller. Oh well...

Regards
--
Stano

--
Before posting, please read the FAQ:
    http://ecos.sourceware.org/fom/ecos
and search the list archive:
    http://ecos.sourceware.org/ml/ecos-discuss