Re: Data Deduplication with the help of an online filesystem check

Hello Michael,

> I'd start with a crc32 and/or MD5 to find candidate blocks, then do a 
> bytewise comparison before actually merging them. Even the risk of an 
> accidental collision is too high, and considering there are plenty of 
> birthday-style MD5 attacks it would not be extraordinarily difficult to 
> construct a block that collides with e.g. a system library.

I agree. But using a crc32 to identify blocks might give too many
false positives; someone would need to try it in practice and run some
statistics on real data to tell whether that is actually the case.
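
For what it's worth, here is a rough, untested sketch of how one could
collect such statistics (everything here is made up for illustration:
zlib's crc32(), a fixed 4 KiB block size, the file name crcstat.c). It
reads a file block by block, keeps every block in a crc32-indexed
table, and counts how often a crc32 match survives the bytewise
comparison:

/* crcstat.c -- measure crc32 false positives on real data.
 * Build: cc -O2 crcstat.c -lz -o crcstat
 * Keeps every block in memory, so only suitable for test files.
 * Trailing partial blocks are ignored.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <zlib.h>

#define BLKSZ    4096          /* block size under test */
#define NBUCKETS (1u << 20)    /* hash buckets, power of two */

struct blk {
    struct blk *next;
    uLong crc;
    unsigned char data[BLKSZ];
};

static struct blk *table[NBUCKETS];

int main(int argc, char **argv)
{
    unsigned long blocks = 0, dups = 0, collisions = 0;
    unsigned char buf[BLKSZ];
    FILE *f;

    if (argc != 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }
    f = fopen(argv[1], "rb");
    if (!f) {
        perror("fopen");
        return 1;
    }

    while (fread(buf, 1, BLKSZ, f) == BLKSZ) {
        uLong crc = crc32(0L, buf, BLKSZ);
        unsigned idx = crc & (NBUCKETS - 1);
        struct blk *b, *match = NULL;
        int collided = 0;

        blocks++;
        for (b = table[idx]; b; b = b->next) {
            if (b->crc != crc)
                continue;
            /* crc32 agrees: only memcmp() can tell dup from collision */
            if (memcmp(b->data, buf, BLKSZ) == 0) {
                match = b;
                break;
            }
            collided = 1;
        }
        if (match) {
            dups++;             /* true duplicate */
            continue;
        }
        if (collided)
            collisions++;       /* crc32 false positive */

        b = malloc(sizeof(*b));
        if (!b) {
            perror("malloc");
            return 1;
        }
        b->crc = crc;
        memcpy(b->data, buf, BLKSZ);
        b->next = table[idx];
        table[idx] = b;
    }
    fclose(f);
    printf("%lu blocks, %lu true duplicates, %lu crc32 collisions\n",
           blocks, dups, collisions);
    return 0;
}

Running that over a few representative filesystem images would answer
exactly the question above: how many crc32 matches are real duplicates.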

> Keep in mind that although digests do a fairly good job of making
> unique identifiers for larger chunks of data, they can only hold so
> many unique combinations. Considering you're comparing blocks of a few
> kibibytes in size it's best to just do a foolproof comparison. There's
> nothing wrong with using a checksum/digest as a screening mechanism
> though.

Again, absolutely agreed.
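
Just to spell out what we are agreeing to, the merge guard could look
like this (again only an illustrative sketch built on zlib's crc32()
and a fixed 4 KiB block; a real implementation would compare digests it
already has stored instead of recomputing them):

#include <string.h>
#include <zlib.h>

#define BLKSZ 4096

/*
 * The checksum can only prove two blocks are different, never that
 * they are equal, so memcmp() always has the final word before any
 * blocks are actually merged.
 */
static int blocks_identical(const unsigned char *a, const unsigned char *b)
{
    if (crc32(0L, a, BLKSZ) != crc32(0L, b, BLKSZ))
        return 0;                       /* cheap screen: definitely different */
    return memcmp(a, b, BLKSZ) == 0;    /* screen passed: verify bytewise */
}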

        Thomas
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
