On Wed, 2 Dec 2015 09:49:05 -0500, Austin S Hemmelgarn <ahferroin7@xxxxxxxxx> wrote:
> > So, 138 GB files use just 24 GB on disk - nice!
> >
> > However, I would still expect that compress=zlib has almost the same
> > effect as compress-force=zlib, for 100% text files/logs.
> >
> That's better than 80% space savings (it works out to about 83.6%),
> so I doubt that you'd manage to get anything better than that even
> with only plain text files. It's interesting that there's such a big
> discrepancy though, that indicates that BTRFS really needs some work
> WRT deciding what to compress.

As far as I understood from reading here, btrfs fairly quickly opts out
of compressing further extents if it stumbles across the first block
with a bad compression ratio for a file.

So what I do is mount my backup drive, which holds several months of
snapshots, with compress-force=zlib. New backups go to a scratch area
which is snapshotted after rsync finishes (important: use
--no-whole-file and --inplace; a rough sketch of the commands is at the
end of this mail). On my system drive I use compress=lzo and hope the
heuristics work.

From time to time I use find and btrfs-defrag to selectively recompress
files (using mtime and name filters) and to defrag directory nodes
(which, according to the docs, defragments their metadata).

A 3x TB btrfs mraid1 draid0 (1.6 TB used) fits onto a 2 TB backup drive
with a backlog of around 4 months of daily backups. It looks pretty
effective. Forcing zlib compresses file additions quite well, although
I haven't measured it lately. It was far from 80%, but not far below
40-50%.

I wish per-subvolume compression options were available already.

-- 
Regards,
Kai

Replies to list-only preferred.
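
P.S.: In case a concrete sketch of the backup cycle helps, it is
roughly the following. Device name, paths and the snapshot naming are
made up for illustration, and the scratch area is assumed to be a
subvolume; adjust to your own layout:

    # backup drive mounted with forced zlib compression
    mount -o compress-force=zlib /dev/sdX /mnt/backup

    # sync into the scratch area; --inplace and --no-whole-file make
    # rsync rewrite only changed blocks instead of replacing whole
    # files, so unchanged blocks stay shared with older snapshots
    rsync -a --delete --inplace --no-whole-file \
        /srv/data/ /mnt/backup/scratch/

    # freeze the result as a read-only snapshot once rsync is done
    btrfs subvolume snapshot -r /mnt/backup/scratch \
        /mnt/backup/snapshots/$(date +%Y-%m-%d)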
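
P.P.S.: The periodic recompression pass is just find driving btrfs
filesystem defragment (what I called btrfs-defrag above). The mount
point, name pattern and mtime filter here are only examples:

    # recompress recently modified log files with zlib
    find /mnt/backup -type f -name '*.log' -mtime -30 \
        -exec btrfs filesystem defragment -czlib {} +

    # defragment directory nodes; without -r this touches only the
    # directory metadata, not the files underneath
    find /mnt/backup -type d \
        -exec btrfs filesystem defragment {} +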
