Recovery advice

I have a 4-disk RAID1 setup that fails both to mount and to pass btrfsck
when disk 4 is connected.

With disk 4 attached, btrfsck errors with:
btrfsck: root-tree.c:46: btrfs_find_last_root: Assertion
`!(path->slots[0] == 0)' failed
(I'd have to reboot in a non-functioning state to get the full output.)
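When I next reboot with disk 4 attached I plan to capture the full output
roughly like this (btrfsck is read-only without --repair; the device name
is only a guess at how disk 4 will enumerate):

btrfsck /dev/sde1 2>&1 | tee /root/btrfsck-disk4.log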

I can mount the filesystem in a degraded state with the 4th drive
removed. I believe there is some data corruption, since I see lines like
this in /var/log/messages from the filesystem mounted degraded,ro:

BTRFS info (device sdd1): csum failed ino 4433 off 3254538240 csum
1033749897 private 2248083221
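
For reference, the degraded mount I'm using is roughly this (the mount
point is just an example; /dev/sdd1 matches the device in the log line
above):

mount -o degraded,ro /dev/sdd1 /mnt/btrfs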

I'm at the point where all I can think to do is wipe disk 4 and then
add it back in (sketched below). Is there anything else I should try
first? I have booted btrfs-next with the latest btrfs-progs.
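
For what it's worth, the wipe-and-re-add I have in mind is roughly the
following; the device names and mount point are placeholders, and I
haven't run any of it yet:

# clear the old btrfs signature on disk 4 (placeholder device name)
wipefs -a /dev/sde1

# mount the remaining three disks writable in degraded mode
mount -o degraded /dev/sdd1 /mnt/btrfs

# add disk 4 back as a fresh device, then drop the stale missing member
btrfs device add /dev/sde1 /mnt/btrfs
btrfs device delete missing /mnt/btrfs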

Thanks.
--
Sandy McArthur

"He who dares not offend cannot be honest."
- Thomas Paine