I've noticed that some of the parameters passed to compress_pages are
redundant: we can either reuse one parameter for both the input and the
output value (the number of pages), or infer a value from the existing
parameters (the maximum output limit).
There's no functional change, and stack consumption is slightly smaller.
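
For context, the merged parameters follow the usual in/out pointer
convention. A rough before/after sketch of the prototype (parameter
names approximated from memory; the actual patches are authoritative):

/* before: separate input values and output pointers, caller-computed limit */
int btrfs_compress_pages(int type, struct address_space *mapping,
			 u64 start, unsigned long len,
			 struct page **pages, unsigned long nr_dest_pages,
			 unsigned long *out_pages,
			 unsigned long *total_in, unsigned long *total_out,
			 unsigned long max_out);

/*
 * after: *total_in carries the input length on entry and the bytes
 * consumed on return, *out_pages carries the allocated page count on
 * entry and the pages used on return; the maximum output size is
 * derived inside the compression implementation
 */
int btrfs_compress_pages(int type, struct address_space *mapping,
			 u64 start, struct page **pages,
			 unsigned long *out_pages,
			 unsigned long *total_in, unsigned long *total_out);
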
David Sterba (5):
btrfs: merge length input and output parameter in compress_pages
btrfs: merge nr_pages input and output parameter in compress_pages
btrfs: export compression buffer limits in a header
btrfs: use predefined limits for calculating maximum number of pages for compression
btrfs: derive maximum output size in the compression implementation
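
Patches 3 and 4 export the buffer limits in compression.h so inode.c
can size the destination page array from shared constants instead of
open-coding them. Roughly (names and values as I recall them from the
tree; see the patches for the exact form):

/* compression.h: limits shared by the implementations and callers */
#define BTRFS_MAX_COMPRESSED		(SZ_128K)
#define BTRFS_MAX_UNCOMPRESSED		(SZ_128K)

/* inode.c: cap the destination page count using the exported limit */
nr_pages = min_t(unsigned long, nr_pages,
		 BTRFS_MAX_COMPRESSED / PAGE_SIZE);
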
fs/btrfs/compression.c | 33 ++++++++++++++-------------------
fs/btrfs/compression.h | 28 +++++++++++++++++++---------
fs/btrfs/inode.c | 37 +++++++++++++------------------------
fs/btrfs/lzo.c | 10 +++++-----
fs/btrfs/zlib.c | 9 +++++----
5 files changed, 56 insertions(+), 61 deletions(-)