Re: compression disk space saving - what are your results?

If you have been waiting for a particular compressor to reach Linux,
chances are it already has.

And if you are holding off on btrfs, assuming someone will port your
favorite compression algorithm to a btrfs mount option someday, someone
has thought of that too, and it has already happened. See the links
below, and the mount-option sketch after them.



Add support for LZ4-compressed kernel [LWN.net]
https://lwn.net/Articles/541425/

bzip2/lzma kernel compression [LWN.net]
https://lwn.net/Articles/314295/

Btrfs Picks Up Snappy Compression Support - Phoronix
http://www.phoronix.com/scan.php?page=news_item&px=MTA0MjQ

fusecompress - Transparent compression FUSE filesystem (0.9.x tree) - Google Project Hosting
https://code.google.com/p/fusecompress/
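
As a minimal sketch of the mount-option route (device and mount point
are placeholders; zlib and lzo are the two in-tree btrfs compressors
as of this writing):

  # one-off mount with transparent compression
  mount -o compress=zlib /dev/sdX /mnt/data

  # or persistently via /etc/fstab:
  # /dev/sdX  /mnt/data  btrfs  compress=zlib  0  0

  # remounting also works; only data written from then on
  # is compressed
  mount -o remount,compress=lzo /mnt/data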

On Mon, Dec 21, 2015 at 7:55 PM, Kai Krakow <hurikhan77@xxxxxxxxx> wrote:
> Am Wed, 2 Dec 2015 09:49:05 -0500
> schrieb Austin S Hemmelgarn <ahferroin7@xxxxxxxxx>:
>
>> > So, 138 GB files use just 24 GB on disk - nice!
>> >
>> > However, I would still expect that compress=zlib has almost the same
>> > effect as compress-force=zlib, for 100% text files/logs.
>> >
>> That's better than 80% space savings (1 - 24/138 works out to about
>> 82.6%), so I doubt that you'd manage to get anything better than
>> that even with only plain text files.  It's interesting that there's
>> such a big discrepancy though; it indicates that BTRFS really needs
>> some work WRT deciding what to compress.
>
> As far as I understand from reading this list, btrfs fairly quickly
> opts out of compressing further extents in a file once it stumbles
> across a block with a bad compression ratio.
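
That matches the mount-option semantics: plain compress lets the
kernel give up on a file after a badly-compressing block, while
compress-force compresses every extent regardless. A sketch (device
and mount point are placeholders):

  # heuristic on: btrfs may stop compressing a file early
  mount -o compress=zlib /dev/sdX /mnt

  # heuristic off: every extent goes through zlib
  mount -o compress-force=zlib /dev/sdX /mnt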
>
> So, what I do is use compress-force=zlib for my backup drive, which
> holds several months of snapshots. New backups go to a scratch area
> that is snapshotted after rsync finishes (important: use
> --no-whole-file and --inplace).
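
A minimal sketch of that backup flow (the paths and snapshot naming
are made up for illustration):

  # update the scratch subvolume in place, so only changed blocks
  # are rewritten and snapshot deltas stay small
  rsync -a --inplace --no-whole-file /home/ /backup/scratch/

  # freeze the result as a read-only snapshot
  btrfs subvolume snapshot -r /backup/scratch /backup/daily-$(date +%F)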
>
> On my system drive I use compress=lzo and hope the heuristics work.
> From time to time I use find and btrfs-defrag to selectively
> recompress files (using mtime and name filters) and to defrag
> directory nodes (which, according to the docs, should defragment
> metadata).
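
A sketch of that selective recompression (the path, name filter, and
age cutoff are examples only):

  # recompress recently-modified logs with zlib
  find /var/log -type f -name '*.log' -mtime -30 \
       -exec btrfs filesystem defragment -czlib {} +

  # without -r, defragmenting a directory touches only the
  # directory node itself, i.e. metadata
  btrfs filesystem defragment /var/log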
>
> A three-disk btrfs array (metadata raid1, data raid0, 1.6 TB used)
> fits onto a 2 TB backup drive with a backlog worth around 4 months of
> daily backups. It looks pretty effective. Forcing zlib manages to
> compress file additions quite well, although I haven't measured it
> lately. It was far from 80%, but not far below 40-50%.
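
One rough way to measure it is to compare logical size against
allocated size, since on btrfs the allocated figure reflects
compression (the path is a placeholder):

  # logical (uncompressed) size of the backup tree
  du -sh --apparent-size /backup

  # blocks actually allocated on disk
  du -sh /backup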
>
> I wish one could use a per-subvolume compression option already.
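
Per-subvolume compression indeed isn't there yet, but per-file and
per-directory settings exist and can serve as a partial workaround; a
sketch (the path is a placeholder):

  # new files created under this directory inherit the setting
  btrfs property set /mnt/data/logs compression zlib

  # alternatively, set the legacy compress attribute
  chattr +c /mnt/data/logs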
>
> --
> Regards,
> Kai
>
> Replies to list-only preferred.



