On 07/21/2012 02:42 AM, Zach Brown wrote:
>> + struct page *page;
>> + int i = 0;
>> + int nr = 0;
>
> i doesn't need to be initialized.
>
>> for (page_idx = 0; page_idx < nr_pages; page_idx++) {
>> - struct page *page = list_entry(pages->prev, struct page, lru);
>> + page = list_entry(pages->prev, struct page, lru);
>>
>> prefetchw(&page->flags);
>> list_del(&page->lru);
>> if (!add_to_page_cache_lru(page, mapping,
>> page->index, GFP_NOFS)) {
>> - __extent_read_full_page(tree, page, get_extent,
>> + page_cache_get(page);
>> + pagepool[nr++] = page;
>> + if (nr == 16) {
>
> ARRAY_SIZE(pagepool) instead of duplicating 16.
>
>> + for (i = 0; i < nr; i++) {
>> + __extent_read_full_page(tree,
>> + pagepool[i], get_extent,
>> &bio, 0, &bio_flags);
>> + page_cache_release(pagepool[i]);
>> + }
>> + nr = 0;
>> + }
>> }
>> page_cache_release(page);
>
> It looks like you can optimize away a page cache ref here. Don't add a
> ref when the page is added to the pool, instead use the existing ref.
> Then only release this ref here if add_to_page_cache_lru() fails.
> Then always release the ref when __extent_read_full_page is called on
> the pages in the pool.
>
Sounds good, though it makes btrfs's readpages hook a little different from the other filesystems'.
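
Just to confirm I follow the ref flow: with no extra get when a page goes
into the pool, flushing the pool would drop the ref the page carried in
from the list after each read is submitted, roughly like this (untested,
reusing the names from the patch above):

	for (i = 0; i < nr; i++) {
		/* submit the read, then drop the ref the page brought
		 * in from the pages list -- no extra page_cache_get() */
		__extent_read_full_page(tree, pagepool[i], get_extent,
					&bio, 0, &bio_flags);
		page_cache_release(pagepool[i]);
	}
	nr = 0;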
> I'd also invert the nested if()s to reduce the painful indent level:
>
> if (add_to_page_cache_lru(page, mapping,
> page->index, GFP_NOFS)) {
> page_cache_release(page);
> continue;
> }
>
> pagepool[nr++] = page;
> if (nr < ARRAY_SIZE(pagepool))
> continue;
>
> for (i = 0; i < nr; i++) {
> __extent_read_full_page(tree, ...
>
> - z
>
I'll update it. Thanks Zach.
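
For the record, here is roughly what I have in mind after folding in your
suggestions (untested sketch, just to check I've read them right; bio,
bio_flags and the rest of the surrounding declarations stay as in the
existing function, and the leftover pages in the pool get a final flush
after the main loop):

	struct page *pagepool[16];
	struct page *page;
	int nr = 0;
	int i;

	for (page_idx = 0; page_idx < nr_pages; page_idx++) {
		page = list_entry(pages->prev, struct page, lru);

		prefetchw(&page->flags);
		list_del(&page->lru);

		/* failed to insert: drop the ref from the list and move on */
		if (add_to_page_cache_lru(page, mapping,
					page->index, GFP_NOFS)) {
			page_cache_release(page);
			continue;
		}

		/* keep the existing ref for the pool, no page_cache_get() */
		pagepool[nr++] = page;
		if (nr < ARRAY_SIZE(pagepool))
			continue;

		for (i = 0; i < nr; i++) {
			__extent_read_full_page(tree, pagepool[i], get_extent,
						&bio, 0, &bio_flags);
			page_cache_release(pagepool[i]);
		}
		nr = 0;
	}

	/* flush whatever is left in the pool */
	for (i = 0; i < nr; i++) {
		__extent_read_full_page(tree, pagepool[i], get_extent,
					&bio, 0, &bio_flags);
		page_cache_release(pagepool[i]);
	}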
thanks,
liubo