Re: [PATCH] ixgbe: fix truesize calculation when merging active tail into lro skb


On 02/14/2012 09:39 AM, Eric Dumazet wrote:
> On Tuesday, February 14, 2012 at 09:21 -0800, Alexander Duyck wrote:
>
>> The code itself is correct, but the comment isn't.  This code path is
>> applied only to the case where we are not using pages.  The default Rx
>> buffer size is actually about 3K when RSC is in use, which means
>> truesize is about 4.25K per buffer.
>>
> Hmm... any reason it's not 2.25K per buffer? (assuming MTU=1500)
>
> Do you really need this code in ixgbe_set_rx_buffer_len() ?
>
>                 /*
>                  * Make best use of allocation by using all but 1K of a
>                  * power of 2 allocation that will be used for skb->head.
>                  */
>                 else if (max_frame <= IXGBE_RXBUFFER_3K)
>                         rx_buf_len = IXGBE_RXBUFFER_3K;
>                 else if (max_frame <= IXGBE_RXBUFFER_7K)
>                         rx_buf_len = IXGBE_RXBUFFER_7K;
>                 else if (max_frame <= IXGBE_RXBUFFER_15K)
>                         rx_buf_len = IXGBE_RXBUFFER_15K;
>                 else
>                         rx_buf_len = IXGBE_MAX_RXBUFFER;
>
> Why not use:
> 		rx_buf_len = max_frame;
>
> and let kmalloc() do its best?

The reason for all of this is receive side coalescing (RSC).  RSC causes
us to do full-buffer-size DMAs even when the max frame size is less than
the Rx buffer length.  If RSC is disabled via the NETIF_F_LRO flag, the
default drops to a 1522-byte buffer size, and kmalloc can satisfy that
with a 2K allocation.
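
To put rough numbers on that (a minimal sketch, not kernel code; the
NET_SKB_PAD and skb_shared_info sizes below are assumptions for a
typical 64-bit build, not the exact kernel constants):

#include <stdio.h>

#define NET_SKB_PAD       64   /* assumed headroom reserved by the stack */
#define SHARED_INFO_SIZE  320  /* assumed sizeof(struct skb_shared_info) */

/* kmalloc rounds anything over 256 bytes up to the next power of 2 */
static unsigned int kmalloc_size(unsigned int len)
{
	unsigned int size = 256;

	while (size < len)
		size <<= 1;
	return size;
}

int main(void)
{
	unsigned int need = 1522 + NET_SKB_PAD + SHARED_INFO_SIZE;

	/* 1522 + 64 + 320 = 1906, which still fits in a 2K slab */
	printf("need %u -> kmalloc gives %u\n", need, kmalloc_size(need));
	return 0;
}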

If I am not mistaken, kmalloc only allocates power-of-2 sized blocks for
anything over 256 bytes.  I made the above code change a little while
back when I realized that with RSC enabled we were setting up a 2K
buffer which, after adding padding and skb_shared_info, came to 2.375K
and therefore resulted in a 4K allocation.  After seeing that I decided
it was better for us to set the buffer size to 3K, which reduced RSC
descriptor processing overhead for the standard case by 50% and made use
of 1K of the otherwise wasted space.
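
Here is the same back-of-the-envelope math for the RSC case, again with
assumed constants (the real values come from ixgbe.h, SKB_DATA_ALIGN(),
and <linux/skbuff.h>); it also shows where the ~4.25K truesize figure
quoted above comes from, if sizeof(struct sk_buff) is taken as roughly
0.25K:

#include <stdio.h>

#define NET_SKB_PAD       64   /* assumed headroom reserved by the stack */
#define SHARED_INFO_SIZE  320  /* assumed sizeof(struct skb_shared_info) */
#define SK_BUFF_SIZE      256  /* assumed sizeof(struct sk_buff), ~0.25K */

/* kmalloc rounds anything over 256 bytes up to the next power of 2 */
static unsigned int kmalloc_size(unsigned int len)
{
	unsigned int size = 256;

	while (size < len)
		size <<= 1;
	return size;
}

static void show(unsigned int buf_len)
{
	unsigned int need = buf_len + NET_SKB_PAD + SHARED_INFO_SIZE;
	unsigned int slab = kmalloc_size(need);

	printf("buf %u: need %u -> %u slab, truesize ~%u\n",
	       buf_len, need, slab, slab + SK_BUFF_SIZE);
}

int main(void)
{
	show(2048);	/* old setting: 2432 (2.375K) needed -> 4K slab */
	show(3072);	/* new setting: same 4K slab, 1K of it reclaimed */
	return 0;
}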

I already have patches in the works that will do away with all of this
code pretty soon anyway, replacing it with something similar to our
page-based packet split path.  That will also end up doing away with the
current RSC code, since page-based receives don't need to be queued; we
are just adding pages to frags.

Thanks,

Alex 


