Re: Mass-Hardlinking Oops

* [Tracy Reed] 

> "clever" indeed. It creates filesystems with zillions of inodes which
> are a pain to work with. This is the sort of large storage application
> I would be looking to use btrfs for, and apparently the current
> implementation would croak.

As I understand it, the current implementation shouldn't croak unless
you keep a few hundred copies of the same file in one directory being
backed up, since the limit is apparently on hard links to the same file
in the same directory.  At least the last time I used it, BackupPC would
make a new tree for each backup (with hard links to the pool), so you
shouldn't hit this limit in the normal case.
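To make the distinction concrete, here is a minimal sketch of that kind of per-backup tree layout, with every name hard-linked to a shared pool. All paths and file names here are made up for illustration; the point is just that the link count on the pooled inode grows by one per backup tree, not per directory.

```shell
# Hypothetical BackupPC-style layout: one shared pool, one tree per backup.
mkdir -p pool backup-1 backup-2
echo "file contents" > pool/abc123      # pooled copy (abc123 is a made-up pool name)
ln pool/abc123 backup-1/etc_hosts       # backup 1 hard-links to the pool
ln pool/abc123 backup-2/etc_hosts       # backup 2 links to the same pool entry
# One inode, three names; the links live in three different directories,
# so a per-directory hard-link limit is not the thing being stressed here:
stat -c %h pool/abc123                  # prints 3
```

You only approach the limit under discussion if many of those links end up to the same file in the *same* directory.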

However, speaking of BackupPC, it occurs to me that in the context of
btrfs, that kind of storage strategy looks fairly outmoded anyway: it
could benefit from the block-level copy-on-write features already
present in the file system (with a bit of block-based data
de-duplication thrown in for good measure).
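A rough sketch of what that alternative looks like, again with made-up paths: instead of hard-linking into a pool, each backup takes a reflink copy, so the underlying blocks are shared copy-on-write until one side is modified. This needs a filesystem with reflink support (btrfs, and later XFS); `cp --reflink=auto` falls back to a plain copy elsewhere.

```shell
# CoW alternative to hard-link pooling (illustrative names only).
mkdir -p cowpool cowbackup
echo "file contents" > cowpool/abc123
cp --reflink=auto cowpool/abc123 cowbackup/etc_hosts
# Unlike a hard link, the reflink copy is a separate inode with its own
# link count of 1, so hard-link limits never enter the picture:
stat -c %h cowbackup/etc_hosts          # prints 1
```

The trade-off is that block sharing becomes invisible to `stat`-style link counting, which is exactly why tools like BackupPC grew their hard-link pools in the first place.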

Øystein
-- 
My coat?  Oh, I left it in the bike shed..

