On Fri, 19 Jun 2020 at 09:45, Roman Mamedov <rm@xxxxxxxxxxx> wrote:
>
> On Fri, 19 Jun 2020 09:24:26 +0200
> Daniel Smedegaard Buus <danielbuus@xxxxxxxxx> wrote:
>
> > I was testing btrfs to see data checksumming behavior when
> > encountering a rotten area, so I set up a loop device backed by a 1GB
> > file. I filled it with a compressed file and made it rot with, e.g.,
> >
> > dd if=/dev/zero of=loopie bs=1k seek=800000 count=1
> >
> > That is, the equivalent of having data on a single block on an actual
> > hard drive go bad.
>
> Not really, because when real on-disk sectors go bad, the (properly behaving)
> drive will return I/O errors, not blocks of zeroes instead.
>

Well, that's why I wrote having the *data* go bad, not the drive. Either
scenario should still end up yielding effectively the same behavior from
btrfs, albeit more slowly and with more chatter in the kernel log :D

But check out my retraction reply from earlier: it was just me being
stupid and forgetting to use conv=notrunc on the dd command I used to
damage the loopback file :) Btrfs behaves exactly as expected when I
damage the loopback file properly.

Cheers :)
Daniel

> Roman
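
P.S. For anyone replaying this from the archives, that means the corruption
command should be something like the one below (same backing file and offset
as in the quoted command, just with conv=notrunc added). Without
conv=notrunc, dd preserves only the blocks it seeks over and truncates the
output file right after the block it writes, so the 1GB image gets cut down
to roughly 800MB instead of having a single block damaged; with it, dd
overwrites just that one 1 KiB block in place:

  dd if=/dev/zero of=loopie bs=1k seek=800000 count=1 conv=notrunc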
