On a system where I've been tinkering with linux-next for years, my / has developed some errors. I first saw them when migrating from 5.1 to 5.2, but just ignored them, went back to 5.1, and continued my tinkering. Over the holidays I decided to try upgrading the kernel again, saw the errors once more, and decided to look into them; that led me to the tree-checker page in the kernel docs, and to writing this e-mail. My / does not contain anything sensitive or important, as /home and /usr/src are both subvolumes on a separate, larger drive.

btrfs fi show:

  Label: none  uuid: 815266d6-a8b9-4f63-a593-02fde178263f
          Total devices 2 FS bytes used 93.81GiB
          devid    1 size 115.12GiB used 115.11GiB path /dev/nvme0n1p2
          devid    3 size 115.12GiB used 115.11GiB path /dev/sda3

  Label: none  uuid: 4bd97711-b63c-40cb-bfa5-aa9c2868cf98
          Total devices 1 FS bytes used 536.48GiB
          devid    1 size 834.63GiB used 833.59GiB path /dev/sda5

Booting a more recent kernel, I get spammed with:

  [    8.243899] BTRFS critical (device nvme0n1p2): corrupt leaf: root=5 block=303629811712 slot=30 ino=5380870, invalid inode generation: has 13221446351398931016 expect [0, 2997884]
  [    8.243902] BTRFS error (device nvme0n1p2): block=303629811712 read time tree block corruption detected

There are 6 corrupted inodes, and they all have the same bogus value for the inode generation:

  $ grep "BTRFS critical" dmesg.foo | sed -re 's:.*block=([0-9]+).*ino=([0-9]+).*:\1 \2:' | sort -u
  303629811712 5380870
  303712501760 3277548
  303861395456 5909140
  304079065088 2228479
  304573444096 3539224
  305039556608 1442149

Before I reboot into a livecd and run btrfs check --repair, is there anything interesting that a snapshot of the current state would show? I have 800GB unpartitioned on the NVMe, so backing up first is already in the plans, and I could just as easily grab an image of the partitions while I'm at it.
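
In case it's useful for the report, I figure something like this should map those inode numbers back to paths, assuming the filesystem still mounts (read-only if need be); untested sketch, with / standing in for wherever the affected filesystem is mounted:

  # resolve each corrupted inode number to its path(s)
  for ino in 5380870 3277548 5909140 2228479 3539224 1442149; do
      btrfs inspect-internal inode-resolve "$ino" /
  done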
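
And to capture what the corrupt leaves actually contain before --repair touches them, my understanding is that dump-tree can print a single block by its logical address (the block= values above), run from the livecd with the filesystem unmounted:

  # dump each leaf the tree-checker complained about
  for block in 303629811712 303712501760 303861395456 304079065088 304573444096 305039556608; do
      btrfs inspect-internal dump-tree -b "$block" /dev/nvme0n1p2
  done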
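
For the backup itself, the plan is roughly raw images of both devices plus a metadata-only dump (/mnt/backup is a placeholder for wherever the images end up):

  # raw images of both devices of the / filesystem
  dd if=/dev/nvme0n1p2 of=/mnt/backup/nvme0n1p2.img bs=64M status=progress
  dd if=/dev/sda3 of=/mnt/backup/sda3.img bs=64M status=progress

  # compressed metadata-only image, taken while unmounted; I gather this is
  # the small artifact developers usually ask for, and it contains no file data
  btrfs-image -c9 -t4 /dev/nvme0n1p2 /mnt/backup/root-metadata.img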
