Re: Troubleshoot help needed - RAID1 not mounting : failed to read block groups

Thanks for your help. I compiled btrfs-progs v5.6 from GitHub.
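
In case it's useful to anyone else stuck on Debian 9, the build is roughly the usual autotools flow (--disable-documentation just skips the asciidoc man pages, and running ./btrfs straight from the build directory avoids overwriting the packaged v4.7.3):

    git clone https://github.com/kdave/btrfs-progs.git
    cd btrfs-progs
    git checkout v5.6
    ./autogen.sh
    ./configure --disable-documentation
    make
    ./btrfs version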

Here is the dump from /dev/sda: https://pastebin.com/e3YZxxsZ

And btrfs check returned an error instantly:
Opening filesystem to check...
ERROR: child eb corrupted: parent bytenr=5923702292480 item=2 parent level=2 child level=0
ERROR: failed to read block groups: Input/output error
ERROR: cannot open file system
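
If a dump of this new corrupted parent block would help, I assume the same dump-tree invocation as before applies, just with the bytenr from this error:

    btrfs inspect-internal dump-tree -b 5923702292480 /dev/sda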


‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On Thursday, April 30, 2020 7:38 PM, Chris Murphy <lists@xxxxxxxxxxxxxxxxx> wrote:

> On Thu, Apr 30, 2020 at 5:57 AM Nouts nouts@xxxxxxxxxxxxxx wrote:
>
> > > [ 4645.402880] BTRFS info (device sdb): disk space caching is enabled
> > > [ 4645.405687] BTRFS info (device sdb): has skinny extents
> > > [ 4645.451484] BTRFS error (device sdb): failed to read block groups: -117
> > > [ 4645.472062] BTRFS error (device sdb): open_ctree failed
> > > mount: wrong fs type, bad option, bad superblock on /dev/sdb, missing codepage or helper program, or other error
> > > In some cases useful info is found in syslog - try dmesg | tail or so.
>
> > > I attached the smartctl results from the day before and the last scrub report I got, from a month ago. From my understanding, everything was OK.
> > > I use hardlinks (on the same partition/pool) and I had deleted some data just the day before. I suspect my daily scrub routine triggered something that night, and the next day /home was gone.
> > > I can't scrub anymore as the filesystem won't mount. Mounting with usebackuproot, degraded, or ro produces the same error.
> > > I tried "btrfs check /dev/sda":
> > > checking extents
> > > leaf parent key incorrect 5909107507200
> > > bad block 5909107507200
> > > Errors found in extent allocation tree or chunk allocation
> > > Checking filesystem on /dev/sda
> > > UUID: 3720251f-ef92-4e21-bad0-eae1c97cff03
>
> What do you get for:
>
> btrfs insp dump-t -b 5909107507200 /dev/sda
>
> > > Then I tried "btrfs rescue zero-log /dev/sda", which produced a weird stack trace...
>
> btrfs-progs is really old
>
> > > Finally I tried "btrfs rescue chunk-recover /dev/sda", which ran on all 3 drives at the same time for 8+ hours...
> > > It asked to rebuild some metadata trees, which I accepted (I did not save the full output, sorry), and it ended with the same stack trace as above.
> > > The only command left is "btrfs check --repair", but I'm afraid it might do more harm than good.
>
> With that version of btrfs-progs it's not advised.
>
> > > I'm running Debian 9 (still, because of some dependencies). My kernel is already backported : 4.19.0-0.bpo.6-amd64 #1 SMP Debian 4.19.67-2+deb10u2~bpo9+1 (2019-11-12) x86_64 GNU/Linux
> > > btrfs version : v4.7.3
>
> I suggest finding newer btrfs-progs, 5.4 or better, or compiling it from git.
> https://github.com/kdave/btrfs-progs
>
> And then run:
>
> btrfs check /dev/sda
>
> Let's see what that says.
>
>
> --
>
> Chris Murphy





