Re: first it froze, now the (btrfs) root fs won't mount ...

[Please CC me, I'm not on the list.]

On Mon, 21 Oct 2019 at 15:34, Qu Wenruo <quwenruo.btrfs@xxxxxxx> wrote:
> [...] just fstrim wiped some old tree blocks. But maybe it's some unfortunate race, that fstrim trimmed some tree blocks still in use.

Forgive me for asking, but assuming that's what happened, why are the
backup blocks "not in use" from fstrim's perspective in the first
place? I'd consider backup (meta)data to be valuable payload data,
something to be stored extra carefully. No use making them if they're
no good when you need them, after all. In other words, does fstrim by
default trim btrfs metadata (in which case fstrim is broken), or does
btrfs in effect store backup data in "unused" space (in which case
btrfs is broken)?

> [...] One good compromise is, only trim unallocated space.

It had never occurred to me that anything would purposely try to trim
allocated space ...
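
(For my own understanding, and assuming stock util-linux and
btrfs-progs, this is roughly how I'd check what fstrim thinks it is
trimming and whether the device advertises discard at all; the
mountpoint and device names are placeholders, not my actual setup:)

$ sudo fstrim --verbose /mountpoint   # prints how many bytes the fs reported as free and passed down for discard
$ lsblk --discard /dev/sdX            # nonzero DISC-GRAN / DISC-MAX means the device supports discard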

> As your corruption is only in extent tree. With my patchset, you should be able to mount it, so it's not that screwed up.

To be clear, we're talking data recovery here, not (progress towards)
fs repair, even if I manage to boot with your rescue patchset?
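
(Just so I plan accordingly: if your patchset does let the image mount
read-only, I assume "recovery" simply means copying everything off,
along the lines of the sketch below; the loop device and target paths
are placeholders.)

$ sudo losetup -f --show patient                  # attaches the image, prints e.g. /dev/loop0
$ sudo mount -o ro,nologreplay /dev/loop0 /mnt/patient
$ rsync -aHAX /mnt/patient/ /path/to/backup/
$ sudo umount /mnt/patient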

A few more random observations from playing with the drive image:
$ btrfs check --init-extent-tree patient
Opening filesystem to check...
Checking filesystem on patient
UUID: c2bd83d6-2261-47bb-8d18-5aba949651d7
repair mode will force to clear out log tree, are you sure? [y/N]: y
ERROR: Corrupted fs, no valid METADATA block group found
ERROR: failed to zero log tree: -117
ERROR: attempt to start transaction over already running one
# rollback

$ btrfs rescue zero-log patient
checksum verify failed on 284041084928 found E4E3BDB6 wanted 00000000
checksum verify failed on 284041084928 found E4E3BDB6 wanted 00000000
bad tree block 284041084928, bytenr mismatch, want=284041084928, have=0
ERROR: could not open ctree
# rollback
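
(Incidentally, the log_root discrepancy noted below can be seen by
dumping all three superblock copies; roughly like this, with the exact
field names from memory:)

$ btrfs inspect-internal dump-super --all patient | grep -wE 'superblock|log_root'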

# hm, super 0 has log_root 284056535040, supers 1 and 2 have log_root 0 ...
$ btrfs check -s1 --init-extent-tree patient
[...]
ERROR: errors found in fs roots
No device size related problem found
cache and super generation don't match, space cache will be invalidated
found 431478808576 bytes used, error(s) found
total csum bytes: 417926772
total tree bytes: 2203549696
total fs tree bytes: 1754415104
total extent tree bytes: 49152
btree space waste bytes: 382829965
file data blocks allocated: 1591388033024
 referenced 539237134336

That ran for a good while, generating a couple of hundred MB of output
(available on request, of course). In any case, it didn't help.

$ ~/local/bin/btrfs check -s1 --repair patient
using SB copy 1, bytenr 67108864
enabling repair mode
Opening filesystem to check...
checksum verify failed on 427311104 found 000000C8 wanted FFFFFF99
checksum verify failed on 427311104 found 000000C8 wanted FFFFFF99
Csum didn't match
ERROR: cannot open file system

I don't suppose the roots found by btrfs-find-root and/or the
subvolumes identified by btrfs restore -l would be any help? It's not
as if the real fs root contained much: just @ [/], @home [/home], and
the Timeshift subvolumes. If btrfs restore -D is to be believed, the
casualties under @home, for example, are inconsequential: caches and
the like, stuff that was likely open for writing at the time.
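
(To be concrete, this is the kind of thing I had in mind; <rootid> and
the output paths are placeholders, and the flags are from memory:)

$ btrfs-find-root patient                               # candidate tree roots with generation numbers
$ btrfs restore -l patient                              # list tree roots / subvolumes
$ btrfs restore -r <rootid> -D patient /tmp/dry         # dry run: list what would be recovered from that root
$ btrfs restore -r <rootid> -x -m patient /mnt/rescue   # actually copy files out, incl. xattrs and owner/mode/times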

I don't know, it just seems strange that with all the (meta)data
that's obviously still there, it shouldn't be possible to restore the
fs to some sort of consistent state.

Good night,
Christian

>
> But extent tree update is really somehow trickier than I thought.
>
> Thanks,
> Qu
>
> >
> > Will keep you posted.
> >
> > Cheers,
> > C.
> >
>


