On Thu, May 25, 2017 at 09:12:20PM +0300, Timofey Titovets wrote:
> Btrfs already skips storing data when compression doesn't
> free at least one byte. Let's improve the logic and check that
> compression frees at least one sector size, because otherwise
> it is useless to store that data compressed.
Yeah, there's room for improvement.
Saving at least one sectorsize sounds OK to me. I'm not sure this
should be implemented inside the compressors (lzo, zlib). There's the
quick shortcut there (the check "if (tot_in > 8192 && tot_in < tot_out)"), but
otherwise the overall decision whether to use the compressed data is
made in compress_file_range.
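
For reference, the in-compressor shortcut looks roughly like this
(paraphrased from memory from fs/btrfs/lzo.c; zlib has a similar
bail-out, and the exact error handling may differ):

	/*
	 * Once more than 8K of input has been consumed and the output
	 * is still larger than the input, the data is effectively
	 * incompressible, so give up early instead of compressing the
	 * whole range.
	 */
	if (tot_in > 8192 && tot_in < tot_out) {
		ret = -E2BIG;
		goto out;
	}

The deciding code in compress_file_range looks like this: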
601 if (will_compress) {
602 /*
603 * we aren't doing an inline extent, round the compressed size
604 * up to a block size boundary so the allocator does sane
605 * things
606 */
607 total_compressed = ALIGN(total_compressed, blocksize);
608
609 /*
610 * one last check to make sure the compression is really a
611 * win, compare the page count read with the blocks on disk
612 */
613 total_in = ALIGN(total_in, PAGE_SIZE);
614 if (total_compressed >= total_in) {
615 will_compress = 0;
616 } else {
...
So the check would go to line 614 (the total_compressed >= total_in
comparison).
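
A minimal sketch of what the sector-granularity check could look like
there (hypothetical, not the actual patch):

	/*
	 * Require at least one full block of savings, otherwise the
	 * same number of blocks is written to disk and the compressed
	 * copy buys us nothing.
	 */
	if (total_compressed + blocksize > total_in) {
		will_compress = 0;
	} else {

Since total_compressed and total_in have just been rounded up, this
mostly coincides with the existing >= comparison, which is one more
reason to keep the decision here rather than in the compressors.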
There's one case that your patch misses and it's the compressed inline
extent. As we'd never submit more than one sectorsize of data to
compression there, the savings could never reach one full sector and
thus we'd always skip the compression. This could be fixed easily, but
I'd like to use it as an example of why the decision should be moved
upwards in the callchain (i.e. to compress_file_range).
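
A sketch of an inline-aware variant (hypothetical names: in_len and
out_len stand for the raw input and compressed sizes before the
ALIGN() rounding above):

	if (in_len <= sectorsize) {
		/*
		 * Inline extent candidate: we never hand more than one
		 * sector to the compressor, so a full sector can never
		 * be saved; any saving at all is still a win here.
		 */
		if (out_len >= in_len)
			will_compress = 0;
	} else if (ALIGN(out_len, blocksize) + blocksize >
		   ALIGN(in_len, blocksize)) {
		/*
		 * Regular extent: demand at least one sector of
		 * savings, otherwise the same number of blocks hits
		 * the disk either way.
		 */
		will_compress = 0;
	}

Keeping both rules in compress_file_range puts the whole policy in one
place.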