----- Original Message -----
> From: "Zygo Blaxell" <ce3g8jdj@xxxxxxxxxxxxxxxxxxxxx>
> To: "richard" <richard@xxxxxx>
> CC: "linux-btrfs" <linux-btrfs@xxxxxxxxxxxxxxx>
> Sent: Friday, November 8, 2019 23:25:57
> Subject: Re: Decoding "unable to fixup (regular)" errors
> On Fri, Nov 08, 2019 at 11:21:56PM +0100, Richard Weinberger wrote:
>> ----- Original Message -----
>> > btrfs found corrupted data on md1. You appear to be using btrfs
>> > -dsingle on a single mdadm raid1 device, so no recovery is possible
>> > ("unable to fixup").
>> >
>> >> The system has ECC memory with md1 being a RAID1 which passes all health checks.
>> >
>> > mdadm doesn't have any way to repair data corruption--it can find
>> > differences, but it cannot identify which version of the data is correct.
>> > If one of your drives is corrupting data without reporting IO errors,
>> > mdadm will simply copy the corruption to the other drive. If one
>> > drive is failing by intermittently injecting corrupted bits into reads
>> > (e.g. because of a failure in the RAM on the drive control board),
>> > this behavior may not show up in mdadm health checks.
>>
>> Well, this is not cheap hardware...
>> Possible, but not very likely IMHO
>
> Even the disks? We see RAM failures in disk drive embedded boards from
> time to time.
Yes. Enterprise-Storage RAID-Edition disks (sorry for the marketing buzzwords).
Even if one disk is silently corrupting data, having that bad block copied over to
the second disk is even less likely.
And I run the RAID health check often.
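(A typical md health check, for reference, is the "check" scrub driven through sysfs,
roughly:

    # start a full consistency pass over md1
    echo check > /sys/block/md1/md/sync_action
    # once it finishes, read the mismatch count
    cat /sys/block/md1/md/mismatch_cnt

A non-zero mismatch_cnt only says the mirror halves differ somewhere; as noted above,
md cannot tell which copy is the good one.)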
Thanks,
//richard