On Mon, May 18, 2020 at 01:13:34PM +0800, Qu Wenruo wrote:
> >> [ 119.624572] BTRFS info (device vdb): balance: start -d -m -s
> >> [ 119.630843] BTRFS info (device vdb): relocating block group 30408704 flags metadata|dup
> >> [ 119.640113] BTRFS critical (device vdb): corrupt leaf: root=18446744073709551607 block=298909696 slot=0, invalid key objectid: has 1 expect 6 or [256, 18446744073709551360] or 18446744073709551604
> >> [ 119.647511] BTRFS info (device vdb): leaf 298909696 gen 11 total ptrs 4 free space 15851 owner 18446744073709551607
> >> [ 119.652214] BTRFS info (device vdb): refs 3 lock (w:0 r:0 bw:0 br:0 sw:0 sr:0) lock_owner 0 current 19404
> >> [ 119.656275] 	item 0 key (1 1 0) itemoff 16123 itemsize 160
> >> [ 119.658436] 		inode generation 1 size 0 mode 100600
> >
> > This is using 1 as ino number, which means root::highest_objectid is not
> > properly initialized.
> >
> > This happened when I'm using btrfs_read_tree_root() instead of
> > btrfs_read_fs_root(), which initializes root::highest_objectid.
>
> After fetching the misc-next branch, that's exactly the problem.
>
> The 3rd patch is using the correct btrfs_get_fs_root(), which won't
> trigger the problem.

I see, thanks. The fact that the data reloc tree does not fit the pattern
of the other trees initialized in the function with btrfs_read_tree_root()
needs to be documented then.
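
For reference, a rough sketch of the difference between the two lookup
paths (the helper names below follow the mainline ones, but the signatures
are simplified/approximate and the two wrapper functions are made up purely
for illustration):

/*
 * Path the report above hit: btrfs_read_tree_root() only reads the root
 * item and the root node.  root->highest_objectid is never primed, so the
 * first objectid handed out for the data reloc tree is 1, which the tree
 * checker then rejects ("invalid key objectid: has 1 ...").
 */
struct btrfs_root *reloc_root_via_read_tree_root(struct btrfs_fs_info *fs_info,
						 struct btrfs_key *key)
{
	return btrfs_read_tree_root(fs_info->tree_root, key);
}

/*
 * Path used by the 3rd patch: btrfs_get_fs_root() additionally goes
 * through btrfs_init_fs_root(), which calls btrfs_find_highest_objectid()
 * and primes root->highest_objectid, so new objectids start after the
 * last existing one instead of at 1.
 */
struct btrfs_root *reloc_root_via_get_fs_root(struct btrfs_fs_info *fs_info,
					      u64 objectid)
{
	return btrfs_get_fs_root(fs_info, objectid, true);
}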
