On Mon, Mar 24, 2014 at 05:58:10PM +0800, Wang Shilong wrote:
> Compressing a small write (<= blocksize) doesn't save us
> any disk space, so skipping it can save us some compression time.

The compressibility depends on the data; a block full of zeros can compress
pretty well, so your patch is too limiting IMO.

> This patch also fixes wrongly setting the nocompression flag on an
> inode. Consider a case where @total_in is 4096 and we get
> @total_compressed 52: because we first align to the page cache size,
> we end up concluding @total_in == @total_compressed and therefore
> clear the inode's compression flag.

This is a bug, but it can be fixed without disabling compression of small
blocks. I have a similar patch as part of the large compression update. The
logic that decides whether a small extent should be compressed depends on the
compression algorithm and some typical data samples: for zlib it's around
~100 B, for lzo around ~200 B. That's the boundary where compressed size ==
uncompressed size, so there's no benefit, only additional overhead.

--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
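[Editor's note: the two points above - that a small zeroed block is still very compressible, while a tiny incompressible buffer gains nothing from compression - can be illustrated with a userspace sketch. This is plain Python using its zlib module, not btrfs code, and the 4096/64-byte sizes are illustrative choices, not kernel constants.]

```python
import os
import zlib

# A blocksize-sized run of zeros: small write, but highly compressible,
# so skipping compression purely by size would lose real savings.
zeros = bytes(4096)
print("zeros:", len(zeros), "->", len(zlib.compress(zeros)))

# A tiny incompressible buffer: zlib's header/checksum framing makes the
# output *larger* than the input, which is the break-even effect behind
# a per-algorithm minimum-size cutoff (~100 B for zlib, ~200 B for lzo).
rand = os.urandom(64)
print("random:", len(rand), "->", len(zlib.compress(rand)))
```

Running it shows the zeroed block collapsing to a few dozen bytes while the random buffer expands, matching the argument that the cutoff should depend on the algorithm and the data, not on blocksize alone.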
