Re: Data Deduplication with the help of an online filesystem check


On Tuesday, 28 April 2009 19:38:24, Chris Mason wrote:
> On Tue, 2009-04-28 at 19:34 +0200, Thomas Glanzmann wrote:
> > Hello,
> >
> > > I wouldn't rely on crc32: it is not a strong hash,
> > > Such deduplication can lead to various problems,
> > > including security ones.
> >
> > sure thing, did you think of replacing crc32 with sha1 or md5, is this
> > even possible (is there enough space reserved so that the change can be
> > done without changing the filesystem layout) at the moment with btrfs?
>
> It is possible, there's room in the metadata for about 4k of
> checksum for each 4k of data.  The initial btrfs code used sha256, but
> the real limiting factor is the CPU time used.
>
> -chris
>
It's not only CPU time, it's also memory: you need 32 bytes for each 4 KiB
block, and the index needs to be in RAM for performance reasons.
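As a rough illustration of that overhead (a minimal sketch, not btrfs code): hashing each 4 KiB block with SHA-256 yields a 32-byte digest, so an in-RAM dedup index costs at least 8 GiB per TiB of data, before any hash-table overhead.

```python
import hashlib

BLOCK_SIZE = 4096   # 4 KiB data block, as discussed above
DIGEST_SIZE = 32    # SHA-256 digest length in bytes

def dedup_index(data):
    """Map each block's SHA-256 digest to its first offset.

    Duplicate blocks are detected when a digest is seen again; with a
    strong hash like SHA-256, accidental collisions are negligible
    (unlike crc32, per the concern quoted above).
    """
    index = {}
    duplicates = []  # (duplicate_offset, original_offset) pairs
    for off in range(0, len(data), BLOCK_SIZE):
        digest = hashlib.sha256(data[off:off + BLOCK_SIZE]).digest()
        if digest in index:
            duplicates.append((off, index[digest]))
        else:
            index[digest] = off
    return index, duplicates

def index_ram_bytes(total_data_bytes):
    # Lower bound: one 32-byte digest per 4 KiB block,
    # ignoring per-entry hash-table overhead.
    return (total_data_bytes // BLOCK_SIZE) * DIGEST_SIZE
```

For example, `index_ram_bytes(1 << 40)` (1 TiB of data) gives 8 GiB of digests alone, which is why keeping the whole index in RAM is the expensive part.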

hjc


--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
