Re: first it froze, now the (btrfs) root fs won't mount ...

On 2019-10-21 06:47, Christian Pernegger wrote:
[Please CC me, I'm not on the list.]

Am So., 20. Okt. 2019 um 12:28 Uhr schrieb Qu Wenruo <quwenruo.btrfs@xxxxxxx>:
Question: Can I work with the mounted backup image on the machine that
also contains the original disc? I vaguely recall something about
btrfs really not liking clones.

If your fs only contains one device (single fs on single device), then
you should be mostly fine. [...] mostly OK.

Should? Mostly? What a nightmare-inducing, yet pleasantly Adams-esque
way of putting things ... :-)

Anyway, I have an image of the whole disk on a server now and am
feeling all the more adventurous for it. (The first try failed a
couple of MB from completion due to spurious network issues, which is
why I've taken so long to reply.)
I've done stuff like this dozens of times on single-device volumes with exactly zero issues. The only time you're likely to see problems is if the kernel thinks (either correctly or incorrectly) that the volume should consist of multiple devices.

Ultimately, the issue is that when you mount a volume, the kernel tries to use every device it knows of with the same volume UUID, without validating the device count or checking for duplicate device UUIDs within the volume, so it can accidentally pull in multiple instances of the same 'device' when mounting.
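To be extra safe when working with the image next to the original disk, the stale registrations can be dropped first. A sketch, printed as a dry run so it can be reviewed before running as root; it assumes btrfs-progs new enough for 'device scan --forget' (roughly v5.2+) and a kernel with the matching ioctl (5.0+), and the paths are illustrative:

```shell
# Hypothetical helper: prints the commands for mounting a dd image of a
# single-device btrfs filesystem without letting the kernel pair it with
# the original disk (both copies share the same fs UUID).
mount_image_dryrun() {
    img=$1 mnt=$2
    # Attach the image read-only via a loop device:
    echo "losetup --find --show --read-only $img"
    # Drop all stale device registrations so only explicitly named
    # devices are considered (btrfs-progs >= 5.2, kernel >= 5.0):
    echo "btrfs device scan --forget"
    # Mount only the loop device, read-only, naming it explicitly
    # (substitute the loop device losetup printed for /dev/loopN):
    echo "mount -t btrfs -o ro,device=/dev/loopN /dev/loopN $mnt"
}

mount_image_dryrun /srv/backup/disk.img /mnt/rescue
```

Reviewing the printed commands before actually running them keeps a typo from touching the original disk.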

You wouldn't happen to know of a [suitable] bootable rescue image [...]?

The Arch Linux ISO at least ships the latest btrfs-progs.

I'm on the Ubuntu 19.10 live CD (btrfs-progs 5.2.1, kernel 5.3.0)
until further notice. Exploring other options (incl. running your
rescue kernel on another machine and serving the disk via nbd) in
parallel.

I'd recommend the following safer methods before trying --init-extent-tree:

- Dump backup roots first:
   # btrfs ins dump-super -f <dev> | grep backup_tree_root
   Then grab all big numbers.

# btrfs inspect-internal dump-super -f /dev/nvme0n1p2 | grep backup_tree_root
backup_tree_root:    284041969664    gen: 58600    level: 1
backup_tree_root:    284041953280    gen: 58601    level: 1
backup_tree_root:    284042706944    gen: 58602    level: 1
backup_tree_root:    284045410304    gen: 58603    level: 1

- Try backup_extent_root numbers in btrfs check first
   # btrfs check -r <above big number> <dev>
   Use the number with highest generation first.
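The two quoted steps can be strung together; a sketch that orders the backup roots newest-generation first, shown here against the sample dump-super output from above rather than a live device:

```shell
# Sample lines as produced by:
#   btrfs inspect-internal dump-super -f /dev/nvme0n1p2 | grep backup_tree_root
sample='backup_tree_root:    284041969664    gen: 58600    level: 1
backup_tree_root:    284041953280    gen: 58601    level: 1
backup_tree_root:    284042706944    gen: 58602    level: 1
backup_tree_root:    284045410304    gen: 58603    level: 1'

# Field 2 is the bytenr, field 4 the generation; sort by generation,
# newest first, and keep only the bytenrs to feed into btrfs check -r:
roots=$(printf '%s\n' "$sample" | awk '{print $4, $2}' | sort -rn | awk '{print $2}')
echo "$roots"
```

The first bytenr printed (284045410304 for this sample, gen 58603) is the one to try first.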

Assuming backup_extent_root == backup_tree_root ...

# btrfs check --tree-root 284045410304 /dev/nvme0n1p2
Opening filesystem to check...
checksum verify failed on 284041084928 found E4E3BDB6 wanted 00000000
checksum verify failed on 284041084928 found E4E3BDB6 wanted 00000000
bad tree block 284041084928, bytenr mismatch, want=284041084928, have=0
ERROR: cannot open file system

# btrfs check --tree-root 284042706944 /dev/nvme0n1p2
Opening filesystem to check...
checksum verify failed on 284042706944 found E4E3BDB6 wanted 00000000
checksum verify failed on 284042706944 found E4E3BDB6 wanted 00000000
bad tree block 284042706944, bytenr mismatch, want=284042706944, have=0
Couldn't read tree root
ERROR: cannot open file system

# btrfs check --tree-root 284041953280 /dev/nvme0n1p2
Opening filesystem to check...
checksum verify failed on 284041953280 found E4E3BDB6 wanted 00000000
checksum verify failed on 284041953280 found E4E3BDB6 wanted 00000000
bad tree block 284041953280, bytenr mismatch, want=284041953280, have=0
Couldn't read tree root
ERROR: cannot open file system

# btrfs check --tree-root 284041969664 /dev/nvme0n1p2
Opening filesystem to check...
checksum verify failed on 284041969664 found E4E3BDB6 wanted 00000000
checksum verify failed on 284041969664 found E4E3BDB6 wanted 00000000
bad tree block 284041969664, bytenr mismatch, want=284041969664, have=0
Couldn't read tree root
ERROR: cannot open file system

   If all backups fail basic btrfs check, and all happen to have the
   same "wanted 00000000", then a big range of tree blocks has been
   wiped out; that's not really a btrfs problem but some hardware-level
   wipe.
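One way to double-check the "hardware wipe" theory: btrfs check reports logical bytenrs, and btrfs-map-logical (shipped with btrfs-progs) can translate one into a physical device offset, after which the range can be compared against zeros. A sketch; the helper assumes GNU cmp for the -n option, and the block size to check (nodesize, 16384 by default) should be taken from dump-super:

```shell
# On the real device, first resolve the logical address, e.g.:
#   btrfs-map-logical -l 284041084928 /dev/nvme0n1p2
# which prints the physical (mirror) offsets for that tree block.

# Then check whether the physical byte range is zero-filled:
is_zeroed() {  # is_zeroed <device-or-file> <byte-offset> <length>
    dd if="$1" bs=1 skip="$2" count="$3" 2>/dev/null |
        cmp -s -n "$3" - /dev/zero   # GNU cmp: compare at most $3 bytes
}
```

If every reported bytenr turns out zero-filled, that points at something below btrfs (firmware, controller, or a stray write) zeroing a contiguous range.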

Doesn't look good, does it? Any further ideas at all or is this the
end of the line? TBH, at this point, I don't mind having to re-install
the box so much as the idea that the same thing might happen again --
either to this one, or to my work machine, which is very similar. If
nothing else, I'd really appreciate knowing what exactly happened here
and why (a bug in the GPU and/or its driver shouldn't cause this), and
an avoidance strategy that goes beyond upgrade-and-pray.
There are actually two possible ways I can think of a buggy GPU driver causing this type of issue:

* The GPU driver in some way caused memory corruption, which in turn caused other problems.
* The GPU driver confused the GPU enough that it issued a P2P transfer on the PCI-e bus to the NVMe device, which in turn caused data corruption on the NVMe device.

Both are reasonably unlikely, but definitely possible. Your best mitigation (other than simply not using that version of that GPU driver) is to make sure your hardware has an IOMMU (unless the CPU or motherboard is very cheap or very old, you _should_ have one) and that it's enabled in firmware. On Intel platforms it's usually labeled 'VT-d' in the firmware configuration; AMD platforms typically just call it an IOMMU.
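Whether the IOMMU actually ended up active can be checked from a running system: when it's enabled, the kernel populates /sys/kernel/iommu_groups with one directory per isolation group. A sketch (the path argument exists only so the check can be exercised against a test directory):

```shell
# Returns success if the given sysfs directory (default: the real one)
# contains at least one IOMMU group.
iommu_active() {  # iommu_active [groups-dir]
    dir=${1:-/sys/kernel/iommu_groups}
    [ -d "$dir" ] && [ -n "$(ls -A "$dir" 2>/dev/null)" ]
}

if iommu_active; then
    echo "IOMMU groups present"
else
    echo "no IOMMU groups (IOMMU off or unsupported)"
fi
```

Grepping the boot log for the relevant driver strings (e.g. dmesg | grep -iE 'dmar|amd-vi') is another quick indicator on most systems.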

However, there's also the possibility that you may have hardware issues. Any of your RAM, PSU, MB, or CPU being bad could easily cause both the data corruption you're seeing as well as the GPU issues, so I'd suggest double checking your hardware if you haven't already.


