Re: Troubleshoot help needed - RAID1 not mounting : failed to read block groups

Hello again,
I'm not familiar with mailing lists. Should I expect an answer sooner or later?
As I need to get back on track as soon as possible, I would like to know whether a quick answer from you is realistic.
I don't mean to be rude; I just want to know whether I should keep waiting for an answer that might save my day and my data, or whether I'm doomed and should wipe my drives already.

I'll take any answer :)
Thank you

Nouts

‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On Tuesday, April 28, 2020 11:26 AM, Nouts <nouts@xxxxxxxxxxxxxx> wrote:

> Hello,
>
> I am having an issue with a RAID1 btrfs pool: "failed to read block groups". I was advised to send the information to this mailing list, as someone might be interested in the debug logs and might also be able to help solve my issue.
>
> I have a 3-drive RAID1 pool (2x3TB + 1x6TB), mounted as /home.
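>
> In case it helps, I can dump the device layout even while the pool is unmounted; this is the standard read-only query I would use (no arguments, so it lists every btrfs filesystem it finds):
>
> # Lists each btrfs filesystem with its member devices and sizes.
> btrfs filesystem show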
>
> My system became really slow while doing nothing, and after a reboot my /home pool can't be mounted. This is the error I got:
>
> [ 4645.402880] BTRFS info (device sdb): disk space caching is enabled
> [ 4645.405687] BTRFS info (device sdb): has skinny extents
> [ 4645.451484] BTRFS error (device sdb): failed to read block groups: -117
> [ 4645.472062] BTRFS error (device sdb): open_ctree failed
> mount: wrong fs type, bad option, bad superblock on /dev/sdb, missing codepage or helper program, or other error
> In some cases useful info is found in syslog - try dmesg | tail or so.
>
> I've attached the smartctl results from the day before and the last scrub report I have, from a month ago. From my understanding, everything was OK.
> I use hardlinks (on the same partition/pool) and I deleted some data just the day before. I suspect my daily scrub routine triggered something that night, and the next day /home was gone.
>
> I can't scrub anymore as the filesystem is not mounted. Mounting with usebackuproot, degraded, or ro produces the same error (exact commands below).
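>
> For reference, these are the mount variants I tried (the mount point is /home as described above; all fail the same way):
>
> mount -o usebackuproot /dev/sdb /home
> mount -o degraded /dev/sdb /home
> mount -o ro /dev/sdb /home
>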
> I tried "btrfs check /dev/sda" :
> checking extents
> leaf parent key incorrect 5909107507200
> bad block 5909107507200
> Errors found in extent allocation tree or chunk allocation
> Checking filesystem on /dev/sda
> UUID: 3720251f-ef92-4e21-bad0-eae1c97cff03
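>
> If the superblocks themselves are of interest, I can post a dump too; a sketch of what I would run, assuming a btrfs-progs recent enough to have inspect-internal dump-super (it only reads from the device):
>
> # Prints all superblock copies of one member device.
> btrfs inspect-internal dump-super -fa /dev/sda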
>
> Then "btrfs rescue super-recover /dev/sda" :
> All supers are valid, no need to recover
>
> Then "btrfs rescue zero-log /dev/sda", which produced a weird stacktrace...
> Unable to find block group for 0
> extent-tree.c:289: find_search_start: Assertion '1' failed.
> btrfs[0x43e418]
> btrfs(btrfs_reserve_extent+0x5c9)[0x4425df]
> btrfs(btrfs_alloc_free_block+0x63)[0x44297c]
> btrfs(__btrfs_cow_block+0xfc)[0x436636]
> btrfs(btrfs_cow_block+0x8b)[0x436bd8]
> btrfs[0x43ad82]
> btrfs(btrfs_commit_transaction+0xb8)[0x43c5dc]
> btrfs[0x42c0d4]
> btrfs(main+0x12f)[0x40a341]
> /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf1)[0x7f1462d712e1]
> btrfs(_start+0x2a)[0x40a37a]
> Clearing log on /dev/sda, previous log_root 0, level 0
>
> Finally I tried "btrfs rescue chunk-recover /dev/sda", which ran on all 3 drives at the same time for 8+ hours...
> It asked to rebuild some metadata trees, which I accepted (I did not save the full output, sorry), and it ended with the same stacktrace as above.
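>
> Before touching the filesystem any further, I'm thinking of copying out whatever is still readable with btrfs restore; a minimal sketch, assuming a big enough scratch disk mounted at /mnt/recovery (that path is just a placeholder):
>
> # Read-only extraction of files to another disk; does not write to the damaged pool.
> btrfs restore -v /dev/sda /mnt/recovery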
>
> The only command left is "btrfs check --repair", but I'm afraid it might do more harm than good.
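>
> If I do go further, my plan would be to boot a live system with a newer btrfs-progs and re-run the check read-only first; a sketch (--readonly only makes the default behaviour explicit, nothing is written):
>
> # Read-only check; --repair stays the very last resort.
> btrfs check --readonly /dev/sda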
>
> I'm running Debian 9 (still, because of some dependencies). My kernel is already backported: 4.19.0-0.bpo.6-amd64 #1 SMP Debian 4.19.67-2+deb10u2~bpo9+1 (2019-11-12) x86_64 GNU/Linux
> btrfs-progs version: v4.7.3
> I originally posted on reddit : https://www.reddit.com/r/btrfs/comments/g99v4v/nas_raid1_not_mounting_failed_to_read_block_groups/
>
> Let me know if you need more information.
>
> Nouts
