Re: getting rid of "csum failed" on a hw raid

On Wed, Jun 07, 2017 at 07:01:21PM +0200, Goffredo Baroncelli wrote:
> On 2017-06-07 17:58, Chris Murphy wrote:
> > 3. My take on this would have been to use btrfs restore and go after
> > the file path if I absolutely needed a copy of this file (no backup),
> > and then copied that back to the file system.
> 
> It might be useful to have a command to handle these situations: read all
> the good data, and read even the wrong data, logging the ranges where the
> checksum is incorrect.  The fact that you have a problem in a few bytes
> doesn't mean that the whole 4k sector has to be discarded.

This is what we did back in the DOS days: even when the OS-level read failed,
reading via INT 13h would usually report an error, but the memory you passed
as the target buffer would still hold all the data with just one or a few bits
flipped -- or, at worst, when a sector's header was hit, 512 bytes missing.

But that's not a good idea for regular read(), even for root: it's possible
the data is not yours and contains some sensitive info from an unrelated
file.

Thus, it'd have to be a special command or a special argument.
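The "read all the good data and log the bad ranges" half can already be
approximated from userspace, since a checksum failure makes the affected
read() fail with EIO block by block.  A minimal sketch (a hypothetical
salvage.py, not an existing btrfs tool; it assumes 4 KiB blocks and
zero-fills the ranges the kernel refuses to return -- recovering the
corrupt bytes themselves is exactly what would need the special
command/argument above):

    #!/usr/bin/env python3
    # salvage.py -- copy a file block by block, logging EIO ranges.
    # Hypothetical sketch, not an existing btrfs tool.
    import errno, os, sys

    BLOCK = 4096  # assume 4 KiB filesystem blocks

    def salvage(src, dst):
        size = os.stat(src).st_size
        in_fd = os.open(src, os.O_RDONLY)
        out_fd = os.open(dst, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
        bad = []
        try:
            for off in range(0, size, BLOCK):
                try:
                    data = os.pread(in_fd, BLOCK, off)
                except OSError as e:
                    if e.errno != errno.EIO:
                        raise
                    # csum failure: the kernel won't hand over the data,
                    # so pad with zeros and remember the range
                    data = b"\0" * min(BLOCK, size - off)
                    bad.append((off, off + len(data)))
                os.pwrite(out_fd, data, off)
        finally:
            os.close(in_fd)
            os.close(out_fd)
        return bad

    if __name__ == "__main__":
        for lo, hi in salvage(sys.argv[1], sys.argv[2]):
            print(f"bad range: {lo}-{hi}")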


Meow!
-- 
⢀⣴⠾⠻⢶⣦⠀ A tit a day keeps the vet away.
⣾⠁⢰⠒⠀⣿⡁
⢿⡄⠘⠷⠚⠋⠀ (Rejoice as my small-animal-murder-machine got unbroken after
⠈⠳⣄⠀⠀⠀⠀ nearly two years of no catch!)