Hi,

Some real-world figures from running duperemove deduplication on BTRFS:

I have an external 2.5", 5400 RPM, 1 TB, USB3 HD on which I store the BTRFS backups (full rsync) of 5 PCs, using 2 different distros, typically at the same update level, and all of them more or less sharing all or part of the same set of user files. For each of these PCs I keep a series of 4-5 BTRFS subvolume snapshots, so I have complete backups at different points in time. The HD was 93% full and made a good testbed for deduplicating.

So I ran duperemove on this HD, using a hashfile, on a machine doing "only this": an Intel i5 with 6 GB of RAM.

Well, the damn thing ran for 15 days uninterrupted! ...Until I [Ctrl]-C'd it this morning, as I had to move the machine (I wasn't expecting it to last THAT long...).

It took about 48 hours just to calculate the file hashes. Then it took another 48 hours just to "load the hashes of duplicate extents". Then it spent 11 days deduplicating, until I killed it.

In the end, the disk that was 93% full is now 76% full, so I saved 17% of 1 TB (about 170 GB) by deduplicating for 15 days.

Well, the thing "works" and my disk isn't full anymore, so that's a partial success, but I still wonder whether the gain is worth the effort...

Best regards.

ॐ

--
Swâmi Petaramesh <swami@xxxxxxxxxxxxxx>
PGP 9076E32E
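P.S. For anyone wanting to try the same thing, the invocation looks roughly like this (the mount point and hashfile path below are examples, not the exact ones I used; check man duperemove for the flags your version supports):

    duperemove -dhr --hashfile=/var/tmp/backup.hash /mnt/backup

Here -d actually submits the dedupe requests (without it duperemove only scans and reports), -r recurses into subdirectories, -h prints sizes in human-readable units, and --hashfile keeps the block hashes in an on-disk database instead of in RAM, so the scan doesn't exhaust a 6 GB machine and can be reused on a later run.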
