Re: [PATCH v2] btrfs: speedup mount time with readahead chunk tree

On 2020/7/9 5:13 PM, David Sterba wrote:
On Thu, Jul 09, 2020 at 10:38:42AM +0800, Robbie Ko wrote:
On 2020/7/9 5:11 AM, David Sterba wrote:
On Tue, Jul 07, 2020 at 09:25:11PM +0200, David Sterba wrote:
On Tue, Jul 07, 2020 at 11:59:44AM +0800, robbieko wrote:
This skips readahead of all nodes above level 1; if you find a nicer way
to read ahead the whole tree I won't object, but for the first
implementation level 1 seems OK to me.
Patch below, I tried to create large system chunk by fallocate on a
sparse loop device, but got only 1 node on level 1 so the readahead
cannot show off.

# btrfs fi df .
Data, single: total=59.83TiB, used=59.83TiB
System, single: total=36.00MiB, used=6.20MiB
Metadata, single: total=1.01GiB, used=91.78MiB
GlobalReserve, single: total=26.80MiB, used=0.00B

There were 395 leaf nodes that got read ahead, time between the first
and last is 0.83s and the block group tree read took about 40 seconds.
This was in a VM with file-backed images, and the loop device was
constructed from these devices so it's spinning rust.

I don't have results for non-prefetched mount to compare at the moment.

I think what you're doing is working.

But there are many similar cases that need improvement.
Agreed, but this started from 'let's readahead the chunk tree' and we've
now drifted into fixing, or perhaps making better use of, readahead in
several other areas.

1. load_free_space_tree
We need to read all BTRFS_FREE_SPACE_BITMAP_KEY and
BTRFS_FREE_SPACE_EXTENT_KEY items up to the next FREE_SPACE_INFO_KEY.

2. populate_free_space_tree
We need to read all BTRFS_EXTENT_ITEM_KEY and BTRFS_METADATA_ITEM_KEY
items up to the end of the block group.

3. btrfs_real_readdir
We want to read ahead as many (inode, BTRFS_DIR_INDEX_KEY) items as possible.

4. btrfs_clone
We want to read ahead as many (inode, BTRFS_EXTENT_DATA_KEY) items as possible.

5. btrfs_verify_dev_extents
We need to read all the BTRFS_DEV_EXTENT_KEY items.
Each case needs to be evaluated separately because the items live in
different trees and other item types could be scattered among the ones
we're interested in.

But the list gives insight into what types of readahead we might need,
like a full key range [from, to], or just all items of one key type.

6. caching_kthread (inode-map.c)
We need all the BTRFS_INODE_ITEM_KEY items of the fs tree to build the inode map.
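
(For reference, the only per-search readahead control a caller has today
is the reada field of struct btrfs_path; a sketch of the existing usage,
error handling trimmed:)

	struct btrfs_path *path;

	path = btrfs_alloc_path();
	if (!path)
		return -ENOMEM;
	/*
	 * Existing knob: read ahead neighboring leaves in key order.
	 * There is no way to say "stop at this key", "only this many
	 * leaves", or "only items of this type".
	 */
	path->reada = READA_FORWARD;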

For the above cases it is not possible to write special readahead code
for each one. We have to provide a new readahead framework that lets the
caller determine the scope of the readahead needed.
The possible readahead parameters are as follows (a usage sketch follows
the list):
1. reada_maximum_nr : read ahead at most this many leaves at a time.
2. reada_max_key : early-stop condition for READA_FORWARD.
3. reada_min_key : early-stop condition for READA_BACK.
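
(A rough sketch of how a caller might use such parameters. The
reada_maximum_nr and reada_max_key fields are the hypothetical ones
proposed above, not an existing API, and next_info_key stands in for
e.g. the next FREE_SPACE_INFO_KEY from case 1:)

	path->reada = READA_FORWARD;
	path->reada_maximum_nr = 64;		/* hypothetical: at most 64 leaves */
	path->reada_max_key = next_info_key;	/* hypothetical: stop at this key */
	ret = btrfs_search_slot(NULL, root, &key, path, 0, 0);

With bounds like these, a caller such as load_free_space_tree could keep
the readahead from running past the block group it is iterating.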
Yeah something like that.

We need to review all users of readahead to confirm how the readahead
behaves in each case.
For example, in scrub_enumerate_chunks readahead has very little effect,
because most of the time is spent in scrub_chunk, and processing all the
DEV_EXTENTs of a leaf in scrub_chunk takes a long time.
If the dev tree has been modified in the meantime, the previously
read-ahead leaf may be useless.
Yes that's another case where doing the readahead is useless.

So now the question is whether we should start with the easy cases with
specific readahead and then unify them under a common API, or try to
figure out the API first and then audit all users.

I'd be more in favor of the former, as it gives us a baseline where the
readahead is implemented optimally; the follow-up API cleanup would then
need to keep that performance.

The latter is IMHO harder just because getting an API right on the first
try usually does not work.

Okay, I agree with you.
Please go ahead and submit your patch to speed up the chunk tree read.
Thank you for your help.




