On 2018-05-17 16:19, Qu Wenruo wrote:
>
>
> On 2018-05-17 16:14, Nikolay Borisov wrote:
>>
>>
>> On 17.05.2018 09:27, Qu Wenruo wrote:
>>> James Harvey reported that some corrupted compressed extent data can
>>> lead to various kinds of kernel memory corruption.
>>>
>>> Such corrupted extent data belongs to an inode with the NODATASUM flag,
>>> so the data csum won't help us detect such a bug.
>>>
>>> If we're lucky enough, KASAN can catch it:
>>> ==================================================================
>>> BUG: KASAN: slab-out-of-bounds in lzo_decompress_bio+0x384/0x7a0 [btrfs]
>>> Write of size 4096 at addr ffff8800606cb0f8 by task kworker/u16:0/2338
>>>
>>> CPU: 3 PID: 2338 Comm: kworker/u16:0 Tainted: G O 4.17.0-rc5-custom+ #50
>>> Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
>>> Workqueue: btrfs-endio btrfs_endio_helper [btrfs]
>>> Call Trace:
>>> dump_stack+0xc2/0x16b
>>> print_address_description+0x6a/0x270
>>> kasan_report+0x260/0x380
>>> memcpy+0x34/0x50
>>> lzo_decompress_bio+0x384/0x7a0 [btrfs]
>>> end_compressed_bio_read+0x99f/0x10b0 [btrfs]
>>> bio_endio+0x32e/0x640
>>> normal_work_helper+0x15a/0xea0 [btrfs]
>>> process_one_work+0x7e3/0x1470
>>> worker_thread+0x1b0/0x1170
>>> kthread+0x2db/0x390
>>> ret_from_fork+0x22/0x40
>>> ...
>>> ==================================================================
>>>
>>> The offending compressed data has the following info:
>>>
>>> Header: length 32768 (Looks completely valid)
>>> Segment 0 Header: length 3472882419 (Obviously out of bounds)
>>>
>>> Then when handling segment 0, since it goes beyond the current page, we
>>> need to copy the compressed data into the workspace, and such a huge
>>> length triggers an out-of-bounds memory access, screwing up the whole
>>> kernel.
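>>>
>>> For reference, a minimal sketch of the on-disk format being parsed
>>> here (all lengths are le32, LZO_LEN == 4), with a user-space stand-in
>>> for read_compress_length(); the kernel version uses le32_to_cpu(), so
>>> a little-endian host is assumed below:
>>>
>>>   /* [tot_len][seg0_len][seg0 data ...][seg1_len][seg1 data ...] ... */
>>>   #include <stdint.h>
>>>   #include <string.h>
>>>
>>>   #define LZO_LEN 4
>>>
>>>   static uint32_t read_compress_length(const uint8_t *buf)
>>>   {
>>>   	uint32_t len;
>>>
>>>   	memcpy(&len, buf, LZO_LEN);
>>>   	return len;	/* le32_to_cpu(len) in the kernel */
>>>   }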
>>>
>>> Fix it by adding extra checks on the header and segment headers to
>>> ensure we won't access memory out of bounds, and also check that the
>>> decompressed data won't go out of bounds.
>>>
>>> Reported-by: James Harvey <jamespharvey20@xxxxxxxxx>
>>> Signed-off-by: Qu Wenruo <wqu@xxxxxxxx>
>>> ---
>>> fs/btrfs/lzo.c | 35 ++++++++++++++++++++++++++++++++++-
>>> 1 file changed, 34 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/fs/btrfs/lzo.c b/fs/btrfs/lzo.c
>>> index 3d2ae4c08876..78ebc809072f 100644
>>> --- a/fs/btrfs/lzo.c
>>> +++ b/fs/btrfs/lzo.c
>>> @@ -281,6 +281,7 @@ static int lzo_decompress_bio(struct list_head *ws, struct compressed_bio *cb)
>>> unsigned long working_bytes;
>>> size_t in_len;
>>> size_t out_len;
>>> + size_t max_segment_len = lzo1x_worst_compress(PAGE_SIZE);
>>> unsigned long in_offset;
>>> unsigned long in_page_bytes_left;
>>> unsigned long tot_in;
>>> @@ -294,6 +295,18 @@ static int lzo_decompress_bio(struct list_head *ws, struct compressed_bio *cb)
>>>
>>> data_in = kmap(pages_in[0]);
>>> tot_len = read_compress_length(data_in);
>>> + /*
>>> + * Compressed data header check.
>>> + *
>>> + * The real compressed size can't exceed the extent length, and all
>>> + * pages should be used (an entirely unused trailing page is not
>>> + * possible). If this happens, it means the compressed extent is
>>> + * corrupted.
>>> + */
>>> + if (tot_len > min_t(size_t, BTRFS_MAX_COMPRESSED, srclen) ||
>>> + tot_len < srclen - PAGE_SIZE) {
>>> + ret = -EUCLEAN;
>>> + goto done;
>>> + }
>>
>> So tot_len is the compressed size as written in the compressed stream,
>> whereas srclen is the number of bytes on-disk this compressed extent
>> takes up (as derived from submit_compressed_extents). Shouldn't those
>> two always be equal, i.e. perhaps an assert is in order?
>
> Nope, and in most cases, tot_len is smaller than srclen (the extent
> size).
>
> The compressed data can have trailing padding zeros; in fact, for 8K of
> all-"0xcd" lzo compressed data, tot_len is 100 bytes, and the remaining
> bytes are all padding zeros.
> The compressed extent size is always rounded up to the sector size, so
> in that 8K all-"0xcd" case, @srclen will be 4K.
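>
> To make the bounds concrete, here is a user-space sketch of that header
> check, assuming 4K pages and the 128K BTRFS_MAX_COMPRESSED limit
> (header_len_valid() is only an illustration, not a kernel helper):
>
>   #include <stdbool.h>
>   #include <stddef.h>
>
>   #define PAGE_SIZE		4096UL
>   #define BTRFS_MAX_COMPRESSED	(128UL * 1024)
>
>   static bool header_len_valid(size_t tot_len, size_t srclen)
>   {
>   	size_t upper = srclen < BTRFS_MAX_COMPRESSED ?
>   		       srclen : BTRFS_MAX_COMPRESSED;
>
>   	/* All pages of the extent must be used, except maybe the last */
>   	return tot_len <= upper && tot_len >= srclen - PAGE_SIZE;
>   }
>
> For the 8K all-"0xcd" case: header_len_valid(100, 4096) holds, since
> 100 <= min(128K, 4K) and 100 >= 4K - 4K = 0, while a tot_len larger
> than the extent is rejected.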
>
> Thanks,
> Qu
>
>>
>> srclen comes from the async_extent struct, which in turn is initialized
>> in compress_file_range with the value of "total_compressed", and the
>> value there is actually set by btrfs_compress_pages->lzo_compress_pages
>> (that code makes me wanna sing "You spin me right round, baby, right
>> round, like a record, baby").
Forgot to mention: those call sites are all *write* sites, not *read*
sites.
For read, cb->compressed_len will just be the extent size.
Anyway, the check works fine for both cases.
Thanks,
Qu
>>
>>
>>>
>>> tot_in = LZO_LEN;
>>> in_offset = LZO_LEN;
>>> @@ -308,6 +321,17 @@ static int lzo_decompress_bio(struct list_head *ws, struct compressed_bio *cb)
>>> in_offset += LZO_LEN;
>>> tot_in += LZO_LEN;
>>>
>>> + /*
>>> + * Segment header check.
>>> + *
>>> + * The segment length must not exceed the maximum lzo compressed
>>> + * size, nor run past the end of the total compressed data.
>>> + */
>>> + if (in_len > max_segment_len || tot_in + in_len > tot_len) {
>>> + ret = -EUCLEAN;
>>> + goto done;
>>> + }
>>> +
>>> tot_in += in_len;
>>> working_bytes = in_len;
>>> may_late_unmap = need_unmap = false;
>>> @@ -358,7 +382,7 @@ static int lzo_decompress_bio(struct list_head *ws, struct compressed_bio *cb)
>>> }
>>> }
>>>
>>> - out_len = lzo1x_worst_compress(PAGE_SIZE);
>>> + out_len = max_segment_len;
>>> ret = lzo1x_decompress_safe(buf, in_len, workspace->buf,
>>> &out_len);
>>> if (need_unmap)
>>> @@ -368,6 +392,15 @@ static int lzo_decompress_bio(struct list_head *ws, struct compressed_bio *cb)
>>> ret = -EIO;
>>> break;
>>> }
>>> + /*
>>> + * Decompressed data length check.
>>> + * The uncompressed data must not exceed the uncompressed extent
>>> + * size.
>>> + */
>>> + if (tot_out + out_len > cb->len) {
>>> + ret = -EUCLEAN;
>>> + break;
>>> + }
>>>
>>> buf_start = tot_out;
>>> tot_out += out_len;
>>>
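
For reference, the two new per-segment checks above, mirrored in a small
user-space harness; this assumes 4K pages, so max_segment_len =
lzo1x_worst_compress(4096) = 4096 + 4096/16 + 64 + 3 = 4419, and the
harness itself (seg_valid()/out_valid()) is only an illustration:

  #include <stdbool.h>
  #include <stddef.h>
  #include <stdio.h>

  #define PAGE_SIZE	4096UL
  /* lzo1x_worst_compress(x) == x + x/16 + 64 + 3 */
  #define MAX_SEGMENT_LEN	(PAGE_SIZE + PAGE_SIZE / 16 + 64 + 3)

  /* Segment header check: a segment can't be larger than the lzo
   * worst case for one page, nor run past the compressed stream. */
  static bool seg_valid(size_t in_len, size_t tot_in, size_t tot_len)
  {
  	return in_len <= MAX_SEGMENT_LEN && tot_in + in_len <= tot_len;
  }

  /* Decompressed data check: the output must stay inside the
   * uncompressed extent size (cb->len in the kernel). */
  static bool out_valid(size_t tot_out, size_t out_len, size_t cb_len)
  {
  	return tot_out + out_len <= cb_len;
  }

  int main(void)
  {
  	/* The corrupted segment 0 above claims 3472882419 bytes,
  	 * while lzo1x_worst_compress(4K) is only 4419 */
  	printf("%d\n", seg_valid(3472882419UL, 8, 32768));	/* 0 */
  	printf("%d\n", seg_valid(2000, 8, 32768));		/* 1 */
  	printf("%d\n", out_valid(28672, 4096, 32768));		/* 1 */
  	printf("%d\n", out_valid(32768, 4096, 32768));		/* 0 */
  	return 0;
  }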