On 11/08/2016 04:30 AM, Wang Xiaoguang wrote:
The current limit on the number of asynchronous delalloc pages is
(10 * SZ_1M). With 4K pages that corresponds to 40GB of RAM, a very
large value, so in most cases this limit will never trigger. Here I
lower the limit on the number of asynchronous delalloc pages to
SZ_1M (4GB of RAM).

Signed-off-by: Wang Xiaoguang <wangxg.fnst@xxxxxxxxxxxxxx>
---
 fs/btrfs/inode.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 8e3a5a2..3a910f6 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -1158,7 +1158,7 @@ static int cow_file_range_async(struct inode *inode, struct page *locked_page,
 	struct btrfs_root *root = BTRFS_I(inode)->root;
 	unsigned long nr_pages;
 	u64 cur_end;
-	int limit = 10 * SZ_1M;
+	int limit = SZ_1M;
 
 	clear_extent_bit(&BTRFS_I(inode)->io_tree, start, end, EXTENT_LOCKED,
 			 1, 0, NULL, GFP_NOFS);
As Dave points out, I didn't use the right units for this, so even though we definitely waited on this limit while it was in development, that was probably a different bug.
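To make the units problem concrete, here is a minimal standalone sketch
(plain C, not the actual btrfs code, and assuming 4K pages): the counter
being throttled counts pages, but the limit was written with a byte-size
constant, so the effective cap works out to roughly 40GB of dirty data
rather than the ~10MB the constant suggests.

/*
 * A minimal sketch of the byte/page units mix-up, not the btrfs code.
 * Assumes 4K pages (PAGE_SHIFT == 12).
 */
#include <stdio.h>

#define SZ_1M      (1024LL * 1024)  /* a byte count, not a page count */
#define PAGE_SIZE  4096LL
#define PAGE_SHIFT 12

int main(void)
{
	long long limit = 10 * SZ_1M;  /* compared against a *page* counter */

	printf("limit interpreted as pages: %lld pages\n", limit);
	printf("effective cap:              %lld GB\n",
	       (limit * PAGE_SIZE) >> 30);               /* prints 40 */
	printf("10MB expressed in pages:    %lld pages\n",
	       (10 * SZ_1M) >> PAGE_SHIFT);              /* prints 2560 */
	return 0;
}

Note that Wang's patch shrinks the constant but keeps the same units, so
SZ_1M still means 1,048,576 pages (4GB with 4K pages), not 1MB.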
Do you have a test case where the regular writeback throttling isn't enough to also throttle the async delalloc pages? It might be better to just delete the limit entirely.
-chris
