Re: duperemove : some real world figures on BTRFS deduplication

> 2016-12-08 16:11 GMT+01:00 Swâmi Petaramesh <swami@xxxxxxxxxxxxxx>:
>
> Then it took another 48 hours just for "loading the hashes of duplicate
> extents".
>

I am currently addressing this issue with the following patches:
https://github.com/Floyddotnet/duperemove/commits/digest_trigger

Tested with a 3.9 TB directory containing 4723 objects:

old implementation of dbfile_load_hashes took 36593ms
new implementation of dbfile_load_hashes took 11ms

You can safely use this version, but I still have more work to do (for
example, a migration script for existing hashfiles).
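
The branch name digest_trigger suggests the duplicate-digest set is kept
up to date incrementally in the database instead of being recomputed by
dbfile_load_hashes on every run. Below is a rough, self-contained C/SQLite
sketch of that general idea; the table names (hashes, dup_digests), the
columns and the queries are my own illustrative assumptions, not
duperemove's actual schema or the contents of the linked patches.

/*
 * Rough sketch only - NOT duperemove's schema or the digest_trigger code,
 * just an illustration of the general idea: maintain the set of duplicated
 * digests with a trigger instead of recomputing it at load time with a
 * full GROUP BY over the whole hash table.
 *
 * Build with: cc sketch.c -lsqlite3 -o sketch
 */
#include <stdio.h>
#include <stdlib.h>
#include <sqlite3.h>

static void run(sqlite3 *db, const char *sql)
{
	char *err = NULL;

	if (sqlite3_exec(db, sql, NULL, NULL, &err) != SQLITE_OK) {
		fprintf(stderr, "SQL error: %s\n", err);
		sqlite3_free(err);
		sqlite3_close(db);
		exit(1);
	}
}

int main(void)
{
	sqlite3 *db;

	if (sqlite3_open(":memory:", &db) != SQLITE_OK) {
		fprintf(stderr, "open: %s\n", sqlite3_errmsg(db));
		return 1;
	}

	/* Hypothetical, simplified hash table (not duperemove's). */
	run(db,
	    "CREATE TABLE hashes(ino INTEGER, loff INTEGER, digest BLOB);"
	    "CREATE INDEX hashes_digest ON hashes(digest);"
	    /* Digests seen more than once, kept current by the trigger. */
	    "CREATE TABLE dup_digests(digest BLOB PRIMARY KEY);"
	    "CREATE TRIGGER hashes_dup AFTER INSERT ON hashes "
	    "WHEN (SELECT count(*) FROM hashes "
	    "      WHERE digest = NEW.digest) > 1 "
	    "BEGIN "
	    "  INSERT OR IGNORE INTO dup_digests(digest) VALUES (NEW.digest);"
	    "END;");

	/* Old style: recompute the duplicate set on every load. */
	run(db, "SELECT ino, loff, digest FROM hashes WHERE digest IN "
	        "(SELECT digest FROM hashes "
	        " GROUP BY digest HAVING count(*) > 1);");

	/* New style: join against the pre-maintained duplicate table. */
	run(db, "SELECT h.ino, h.loff, h.digest FROM hashes h "
	        "JOIN dup_digests d ON h.digest = d.digest;");

	sqlite3_close(db);
	return 0;
}

With the duplicate digests already materialised, loading becomes an indexed
join instead of a GROUP BY scan over the whole hash table, which would be
consistent with a drop from tens of seconds to milliseconds on a large
hashfile.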