On Fri, 19 Jun 2020 10:08:43 +0200
Daniel Smedegaard Buus <danielbuus@xxxxxxxxx> wrote:

> Well, that's why I wrote having the *data* go bad, not the drive

But data going bad wouldn't pass unnoticed like that (with reads
resulting in bad data), since drives have end-to-end CRC checking,
including on-disk and through the SATA interface. If data on-disk is
somehow corrupted, that will be a CRC failure on read, and still an
I/O error for the host. I've only heard of some bad SSDs
(SiliconMotion-based) returning corrupted data as if nothing happened,
and only when their flash lifespan is close to depletion.

> even though either scenario should still effectively end up yielding the
> same behavior from btrfs

I believe that's also an assumption you'd want to test, if you want to
be thorough in verifying its behavior on failures or corruptions. And
anyway, it's better to set up a scenario which is as close as possible
to ones you'd get in real life.

> But check out my retraction reply from earlier — it was just me being stupid
> and forgetting to use conv=notrunc on my dd command used to damage the
> loopback file :)

Sure, I only commented on the part where it still made sense. :)

--
With respect,
Roman
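
For the archives, a minimal sketch of the corruption step being discussed
(file name, size, and offset are illustrative, not from the original
thread): without conv=notrunc, dd truncates the output file right after
the bytes it wrote, destroying the rest of the loopback image instead of
corrupting a small region of it.

```shell
# Create a 64 MiB sparse image to stand in for the loopback file
# (illustrative; in the real test this would hold a btrfs filesystem).
truncate -s 64M btrfs.img

# Overwrite one 4 KiB block at a 16 MiB offset with random bytes.
# conv=notrunc keeps the rest of the image intact; without it, the
# file would be truncated to ~16 MiB + 4 KiB.
dd if=/dev/urandom of=btrfs.img bs=4096 count=1 seek=4096 conv=notrunc
```

After this, the image is still 64 MiB and btrfs (via a loop device)
should report checksum errors when reading the damaged extent.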
