Based on kdave's for-next branch.
As the heuristic skeleton has already been merged,
this series populates the heuristic with basic code.
First patch: add simple sampling code.
It takes 16-byte samples with 256-byte shifts
over the input data and collects information about how many
different bytes (symbols) have been found in the sample data.
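
A minimal userspace sketch of the sampling idea, assuming a
256-entry counter bucket (SAMPLE_BYTES, SAMPLE_STRIDE and
sample_input() are illustrative names, not the actual kernel symbols):

#include <stdint.h>
#include <stddef.h>

#define SAMPLE_BYTES	16	/* bytes copied per sample */
#define SAMPLE_STRIDE	256	/* shift between sample starts */

/* Walk the input in 256-byte steps, counting the 16 bytes at each step. */
static void sample_input(const uint8_t *in, size_t len, uint32_t bucket[256])
{
	size_t pos;
	int i;

	for (pos = 0; pos + SAMPLE_BYTES <= len; pos += SAMPLE_STRIDE)
		for (i = 0; i < SAMPLE_BYTES; i++)
			bucket[in[pos + i]]++;
}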
Second patch: add code to calculate
how many unique bytes have been
found in the sample data.
That can quickly detect easily compressible data.
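
Roughly, the byte set size is just the number of nonzero counters
in the bucket; a sketch (byte_set_size() is an illustrative name):

static unsigned int byte_set_size(const uint32_t bucket[256])
{
	unsigned int i, count = 0;

	for (i = 0; i < 256; i++)
		if (bucket[i])
			count++;

	return count;
}

If the count is well below 256, the samples use only a few distinct
symbols and the data is likely easily compressible.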
Third patch: add code to calculate the byte core set size,
i.e. how many unique bytes cover 90% of the sample data.
That code requires the counters in the bucket to be sorted.
This can detect easily compressible data with many repeated bytes,
and incompressible data with evenly distributed bytes.
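
A sketch of the core set idea: sort the counters in descending order,
then count how many of the most frequent symbols are needed to cover
90% of the sampled bytes (userspace qsort() is used here for brevity;
the kernel patch uses sort() from linux/sort.h):

#include <stdlib.h>

static int cmp_desc(const void *a, const void *b)
{
	uint32_t ca = *(const uint32_t *)a;
	uint32_t cb = *(const uint32_t *)b;

	return (ca < cb) - (ca > cb);	/* high counts first */
}

static unsigned int byte_core_set_size(uint32_t bucket[256],
				       uint32_t sampled_bytes)
{
	uint32_t threshold = sampled_bytes * 90 / 100;
	uint32_t covered = 0;
	unsigned int i;

	qsort(bucket, 256, sizeof(bucket[0]), cmp_desc);

	for (i = 0; i < 256 && covered < threshold; i++)
		covered += bucket[i];

	return i;
}

A small result means a few symbols dominate (many repeated bytes,
easily compressible); a result close to 256 means the bytes are
evenly distributed and the data is likely not compressible.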
Changes v1 -> v2:
- Change input data iterator shift 512 -> 256
- Replace magic macro numbers with direct values
- Drop useless symbol population in the bucket,
  as no one cares about where and which symbol is stored
  in the bucket for now
Changes v2 -> v3 (only update #3 patch):
- Fix u64 division problem by using u32 for input_size
- Fix input size calculation: start - end -> end - start
- Add missing sort.h header
Changes v3 -> v4 (only update #1 patch):
- Change counter type in bucket item u16 -> u32
- Drop other fields from the bucket item for now,
  as no one uses them
Timofey Titovets (3):
Btrfs: heuristic add simple sampling logic
Btrfs: heuristic add byte set calculation
Btrfs: heuristic add byte core set calculation
fs/btrfs/compression.c | 109 ++++++++++++++++++++++++++++++++++++++++++++++++-
fs/btrfs/compression.h | 11 +++++
2 files changed, 118 insertions(+), 2 deletions(-)
--
2.14.1