On Fri, Nov 08, 2019 at 11:21:56PM +0100, Richard Weinberger wrote:
> ----- Original Message -----
> > btrfs found corrupted data on md1. You appear to be using btrfs
> > -dsingle on a single mdadm raid1 device, so no recovery is possible
> > ("unable to fixup").
> >
> >> The system has ECC memory with md1 being a RAID1 which passes all health checks.
> >
> > mdadm doesn't have any way to repair data corruption--it can find
> > differences, but it cannot identify which version of the data is correct.
> > If one of your drives is corrupting data without reporting IO errors,
> > mdadm will simply copy the corruption to the other drive. If one
> > drive is failing by intermittently injecting corrupted bits into reads
> > (e.g. because of a failure in the RAM on the drive control board),
> > this behavior may not show up in mdadm health checks.
>
> Well, this is not cheap hardware...
> Possible, but not very likely IMHO
Even the disks? We see RAM failures in disk drive embedded boards from
time to time.
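
For what it's worth, a raw md consistency check can at least count
disagreements between the mirrors (md1 taken from your report; a sketch,
run as root, and expect the check to take a while):

  echo check > /sys/block/md1/md/sync_action   # start a consistency check
  cat /proc/mdstat                             # wait for it to finish
  cat /sys/block/md1/md/mismatch_cnt           # nonzero: the mirrors differ

A nonzero mismatch_cnt only tells you the copies differ somewhere; there
is no checksum at the md layer to say which copy is the right one.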
> >> I tried to find the inodes behind the erroneous addresses without success.
> >> e.g.
> >> $ btrfs inspect-internal logical-resolve -v -P 593483341824 /
> >> ioctl ret=0, total_size=4096, bytes_left=4080, bytes_missing=0, cnt=0, missed=0
> >> $ echo $?
> >> 1
> >
> > That usually means the file is deleted, or the specific blocks referenced
> > have been overwritten (i.e. there are no references to the given block in
> > any existing file, but a reference to the extent containing the block
> > still exists). Although it's not possible to reach those blocks by
> > reading a file, a scrub or balance will still hit the corrupted blocks.
> >
> > You can try adding or subtracting multiples of 4096 to/from the block number
> > to see if you get a hint about which inodes reference this extent.
> > The first block found in either direction should be a reference to the
> > same extent, though there's no easy way (other than dumping the extent
> > tree with 'btrfs ins dump-tree -t 2' and searching for the extent record
> > containing the block number) to figure out which. Extents can be up to
> > 128MB long, i.e. 32768 blocks.
>
> Thanks for the hint!
>
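Something like this untested sketch would automate the neighbor probing
(using the 593483341824 address from your transcript):

  target=593483341824
  for i in $(seq 1 32768); do      # extents are at most 32768 blocks long
    for addr in $((target - 4096 * i)) $((target + 4096 * i)); do
      if btrfs inspect-internal logical-resolve -P "$addr" / 2>/dev/null; then
        echo "first referenced neighbor: $addr"
        break 2
      fi
    done
  done

logical-resolve exits nonzero when nothing references the block (as in
your transcript), so the first success is the nearest referenced neighbor.
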
> > Or modify 'btrfs ins log' to use LOGICAL_INO_V2 and the IGNORE_OFFSETS
> > flag.
> >
> >> My kernel is 4.12.14-lp150.12.64-default (OpenSUSE 15.0), so not
> >> super recent, but AFAICT btrfs should be sane there. :-)
> >
> > I don't know of specific problems with csums in 4.12, but I'd upgrade that
> > for a dozen other reasons anyway. One of those is that LOGICAL_INO_V2
> > was merged in 4.15.
> >
> >> What could cause the errors and how to dig further?
> >
> > Probably a silent data corruption on one of the underlying disks.
> > If you convert this mdadm raid1 to btrfs raid1, btrfs will tell you
> > which disk the errors are coming from while also correcting them.
>
> Hmm, I don't really buy this reasoning. Like I said, this is not
> cheap/consumer hardware.
>
> Thanks,
> //richard
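
If you ever do want per-device attribution, the conversion would go
roughly like this (a sketch, not a tested recipe; the device names below
are made up, the array runs degraded during the balance, so back up first):

  mdadm /dev/md1 --fail /dev/sdb1 --remove /dev/sdb1
  wipefs -a /dev/sdb1                          # clear the md metadata
  btrfs device add /dev/sdb1 /
  btrfs balance start -dconvert=raid1 -mconvert=raid1 /
  btrfs scrub start -Bd /                      # -d: per-device scrub stats
  btrfs device stats /                         # per-device error counters

After that, scrub and the device stats report corruption counts per disk,
which is exactly the attribution mdadm can't give you.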