In some cases, such as trigger_all_cpu_backtrace, netpoll can wind up generating a lot of packets in hard irq context. My rough estimate is perhaps 1500 packets. That is larger than any driver tx ring, which makes netpoll_poll_dev necessary to transmit all of the netconsole packets immediately.

Those 1500+ packets can take up a couple of megabytes of memory if we aren't careful. On some machines that is enough to start depleting the pools GFP_ATOMIC can dig into, so netpoll needs at a minimum to be able to reuse the memory of the skbs it has already transmitted.

Today this reclamation of transmitted packets happens in zap_completion_queue: dev_kfree_skb_irq places all packets to be freed on a completion queue, and netpoll then searches this queue for packets it thinks are freeable and frees them.

Unfortunately the logic netpoll currently uses to decide whether a packet is freeable is incorrect and thus unsafe :( That logic simply verifies that the skb does not have a destructor. Which works most of the time. But in pathological cases it can report that a packet is freeable in hard irq context when it is not.

This set of changes adds a function, skb_irq_freeable, and uses it in zap_completion_queue to remove the bug, and in the bowels of kfree_skb (in skb_release_head_state) to warn if we inappropriately free an skb. While I don't expect this will allow anything except skbs sent by netpoll to be freed, solving the general problem rather than solving it just for packets generated by netpoll seems like a more robust way of handling this.

Eric W. Biederman (3):
      net: Add a test to see if a skb is freeable in irq context
      netpoll: Use skb_irq_freeable to make zap_completion_queue safe.
      net: Warn when a skb is freed inappropriately in hard irq context.
 include/linux/skbuff.h | 13 +++++++++++++
 net/core/netpoll.c     |  2 +-
 net/core/skbuff.c      |  6 +++---
 3 files changed, 17 insertions(+), 4 deletions(-)

Eric
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html