Re: BackupPC, per-dir hard link limit, Debian packaging


On Tuesday 02 March 2010 03:29:05 Robert Collins wrote:
> As I say, I realise this is queued to get addressed anyway, but it seems
> like a realistic thing for people to do (use BackupPC on btrfs) - even
> if something better still can be written to replace the BackupPC store
> in the future. I will note though, that simple snapshots won't achieve
the deduplication level that BackupPC does, because the files don't start
> out as the same: they are identified as being identical post-backup.

Isn't the main idea behind deduplication to merge identical parts of files 
together using CoW? That way you could have many very similar virtual machine 
images, run the deduplication process, and massively reduce the space used 
while still preserving the differences between the images.
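To make the idea concrete, here is a rough sketch (hypothetical Python, not btrfs code; the function name `find_duplicate_blocks` and the 4 KiB `BLOCK_SIZE` are my own choices) of how a post-hoc pass could spot blocks that are byte-identical across files and therefore candidates for CoW sharing:

```python
import hashlib
import os

BLOCK_SIZE = 4096  # assume a typical 4 KiB filesystem block


def find_duplicate_blocks(paths):
    """Map block-content digests to the (file, offset) pairs holding them.

    Any digest stored at more than one location marks blocks that a
    CoW-aware deduplicator could, in principle, merge into one shared
    extent on disk.
    """
    seen = {}
    for path in paths:
        with open(path, "rb") as f:
            offset = 0
            while True:
                block = f.read(BLOCK_SIZE)
                if not block:
                    break
                digest = hashlib.sha256(block).hexdigest()
                seen.setdefault(digest, []).append((path, offset))
                offset += len(block)
    # keep only blocks that occur in more than one place
    return {d: locs for d, locs in seen.items() if len(locs) > 1}
```

A real deduplicator would then ask the filesystem to share those extents rather than just report them; this sketch only shows the detection half.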

If memory serves me right, the plan is to do it in userland, after the fact, 
on an existing filesystem, not while the data is being written. If such a 
daemon or program were available, you would run it on the system after 
rsyncing the workstations.

Though the question remains which approach would reduce space usage more in 
your use case. In my experience, hard links take less space on disk; I don't 
know whether it would be possible to optimise the btrfs CoW mechanism for 
files that are exactly the same.
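For comparison, the hard-link approach BackupPC relies on can be sketched like this (a hypothetical helper `hardlink_identical_files`, not BackupPC's actual pool code): every set of byte-identical files collapses to a single inode, so duplicates cost only a directory entry.

```python
import hashlib
import os


def hardlink_identical_files(root):
    """Replace byte-identical regular files under `root` with hard links.

    Roughly what BackupPC's pool achieves: one inode per unique file
    content, so identical files take (almost) no extra space.
    """
    by_digest = {}
    for dirpath, _dirs, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            if not os.path.isfile(path) or os.path.islink(path):
                continue
            with open(path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            first = by_digest.setdefault(digest, path)
            if first != path and not os.path.samefile(first, path):
                # same content, different inode: relink to the first copy
                os.unlink(path)
                os.link(first, path)
```

Note this only dedupes whole files, which is exactly the limitation mentioned above: block-level CoW sharing could also merge files that are merely similar, at the cost of more bookkeeping.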

> 
> Cheers,
> Rob
> 

-- 
Hubert Kario
QBS - Quality Business Software
02-656 Warszawa, ul. Ksawerów 30/85
tel. +48 (22) 646-61-51, 646-74-24
www.qbs.com.pl

Quality Management System
compliant with ISO 9001:2000
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
