Sandy McArthur <sandymac@xxxxxxxxx> wrote:
> I have a 4-disk RAID1 setup that fails to {mount,btrfsck} when disk 4
> is connected.
>
> With disk 4 attached, btrfsck fails with:
> btrfsck: root-tree.c:46: btrfs_find_last_root: Assertion
> `!(path->slots[0] == 0)' failed
> (I'd have to reboot into a non-functioning state to get the full output.)
>
> I can mount the filesystem in a degraded state with the 4th drive
> removed. I believe there is some data corruption, as I see lines like
> this in /var/log/messages from the degraded,ro filesystem:
>
> BTRFS info (device sdd1): csum failed ino 4433 off 3254538240 csum
> 1033749897 private 2248083221
>
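For anyone finding this in the archives: such a degraded read-only mount
looks roughly like the following (a sketch; /dev/sdd1 and /mnt are
placeholders taken from the log line above, adjust to your setup):

    # mount the surviving RAID1 members without the faulty disk;
    # ro avoids writing to the filesystem while it is degraded
    mount -o degraded,ro /dev/sdd1 /mnt
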
> I'm at the point where all I can think to do is wipe disk 4 and then
> add it back in. Is there anything else I should try first? I have
> booted btrfs-next with the latest btrfs-progs.
It is RAID-1, so why bother with the faulty drive? Just wipe it, add it
back in, then run a btrfs balance. There should be no data loss, because
with two-way mirroring all data is stored twice. Something along these
lines should do it:
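A minimal sketch of the procedure, assuming one surviving member is
/dev/sdb1, the wiped disk comes back as /dev/sde1, and the mount point
is /mnt (all three names are placeholders, adjust to your setup):

    # mount the three good members degraded, read-write this time
    mount -o degraded /dev/sdb1 /mnt
    # drop the stale record of the detached fourth member
    btrfs device delete missing /mnt
    # clear any old superblock, then re-add the wiped disk
    wipefs -a /dev/sde1
    btrfs device add /dev/sde1 /mnt
    # rebalance so all chunks are mirrored across four disks again
    btrfs balance start /mnt

The balance is the step that actually re-replicates the data onto the
new member, so expect it to take a while on a well-filled filesystem.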
Regards,
Kai