Re: Possible Raid Bug

On Fri, Mar 25, 2016 at 1:57 PM, Patrik Lundquist
<patrik.lundquist@xxxxxxxxx> wrote:

>
> Only errors on the device formerly known as /dev/sde, so why won't it
> mount degraded,rw? Now I'm stuck like Stephen.
>
> # btrfs device usage /mnt
> /dev/sdb, ID: 1
>    Device size:             2.00GiB
>    Data,single:           624.00MiB       <<----------
>    Data,RAID10:           102.38MiB
>    Metadata,RAID10:       102.38MiB
>    System,RAID10:           4.00MiB
>    Unallocated:             1.19GiB
>
> /dev/sdc, ID: 2
>    Device size:             2.00GiB
>    Data,RAID10:           102.38MiB
>    Metadata,RAID10:       102.38MiB
>    System,single:          32.00MiB       <<----------
>    System,RAID10:           4.00MiB
>    Unallocated:             1.76GiB
>
> /dev/sdd, ID: 3
>    Device size:             2.00GiB
>    Data,RAID10:           102.38MiB
>    Metadata,single:       256.00MiB       <<----------
>    Metadata,RAID10:       102.38MiB
>    System,RAID10:           4.00MiB
>    Unallocated:             1.55GiB
>
> missing, ID: 4
>    Device size:               0.00B
>    Data,RAID10:           102.38MiB
>    Metadata,RAID10:       102.38MiB
>    System,RAID10:           4.00MiB
>    Unallocated:             1.80GiB
>
> The data written while mounted degraded is in profile 'single' and
> will have to be converted to 'raid10' once the filesystem is whole
> again.
>
> So what do I do now? Why did it degrade further after a reboot?

You're hosed. The file system is now read only and can't be fixed. It's
an old bug. It's not a data loss bug, but it is a major time-loss bug
because the volume has to be rebuilt, which makes it totally unworkable
for production use.
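
The saving grace is that the volume still mounts ro,degraded, so the
rebuild amounts to copying everything off, recreating the array, and
copying it back. A rough sketch, assuming a scratch location at
/backup and the dead disk's replacement at /dev/sde (both hypothetical):

# mount -o ro,degraded /dev/sdb /mnt        <- ro,degraded still works
# cp -a /mnt/. /backup/                     <- get the data off first
# umount /mnt
# mkfs.btrfs -f -d raid10 -m raid10 /dev/sdb /dev/sdc /dev/sdd /dev/sde
# mount /dev/sdb /mnt
# cp -a /backup/. /mnt/

(Note cp -a won't preserve subvolume or snapshot structure; it's the
minimal case.)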

While the appearance of the single chunks is one bug that shouldn't
happen, the worse bug is the bogus check that claims there aren't
enough drives for a rw degraded mount. Those single chunks aren't on
the missing drive; they're on the three remaining ones, so the rw
failure is just a bad bug. It's a PITA, but at least it's not a data
loss bug.

Basically you get one chance to mount rw,degraded, and you have to fix
the problem at that time: replace the missing device and balance away
any phantom single chunks that have appeared. For what it's worth, it's
not the reboot that degraded it further; it's the unmount and then the
attempt to mount rw,degraded a 2nd time that this bug disallows.
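
For reference, that one-shot fix looks roughly like this. It's a
sketch, not tested against this exact volume; it assumes the
replacement disk shows up as /dev/sde (hypothetical) and uses devid 4
for the missing device, per the usage output above:

# mount -o rw,degraded /dev/sdb /mnt        <- the one chance
# btrfs replace start 4 /dev/sde /mnt
# btrfs replace status /mnt                 <- wait until finished
# btrfs balance start -f -dconvert=raid10,soft \
      -mconvert=raid10,soft -sconvert=raid10,soft /mnt

The 'soft' modifier only rewrites chunks that aren't already raid10,
and -f is required because -sconvert changes the system chunk profile.
'btrfs device add' followed by 'btrfs device delete missing' would
also work in place of replace.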


-- 
Chris Murphy