On Mon, Dec 17, 2012 at 06:33:24AM -0700, Alexander Block wrote:
> I did some research on deduplication in the past and there are some
> problems that you will face. I'll try to list some of them (for sure
> not all).

Thanks Alexander for writing all of this up. There are a lot of great
points here, but I'll summarize with:

[ many challenges to online dedup ]
[ offline dedup is the best way ]

So, the big problem with offline dedup is that you're suddenly
read-bound. I don't disagree that offline makes a lot of the dedup
problems easier, and Alexander describes a very interesting system
here.

I've tried to avoid features that rely on scanning, though, just
because idle disk time may not really exist. But with scrub we already
have the scan as a feature, and it may make a lot of sense to leverage
that.

Online dedup has a different set of tradeoffs, but as Alexander says,
the hard part really is the data structure used to index the hashes.
I think there are a few different options here, including changing the
file extent pointers to point to a sha instead of a logical disk
offset. (A rough userspace sketch of such an index is below the sig.)

So, part of my answer really depends on where you want to go with your
thesis. I expect the data structure work for efficient hash lookup is
going to be closer to what your course work requires?

-chris
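
Not btrfs code, just a minimal userspace sketch of the kind of hash
index an offline (scrub-driven) dedup pass would build: hash each
fixed-size block, look the digest up in the index, and record a
duplicate on a hit. The 4 KiB block size, SHA-256, and the in-memory
dict are assumptions standing in for whatever on-disk structure a real
implementation would use; for online dedup the same lookup would have
to happen in the write path, which is why the index structure is the
hard part.

#!/usr/bin/env python3
# Sketch of an offline dedup scan: build an index mapping block
# digests to the offset of the first block seen with that digest,
# and report later blocks that hash to the same digest.
import hashlib
import sys

BLOCK_SIZE = 4096  # assumed dedup granularity

def scan(path):
    index = {}       # sha256 digest -> offset of first block seen
    duplicates = []  # (duplicate offset, original offset) pairs

    with open(path, "rb") as f:
        offset = 0
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            digest = hashlib.sha256(block).digest()
            if digest in index:
                # A real dedup pass would re-read and byte-compare the
                # two blocks before sharing extents, so a hash
                # collision can never corrupt data.
                duplicates.append((offset, index[digest]))
            else:
                index[digest] = offset
            offset += len(block)

    return duplicates

if __name__ == "__main__":
    for dup, orig in scan(sys.argv[1]):
        print("block at %d duplicates block at %d" % (dup, orig))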
