On 17.02.20 13:29, Qu Wenruo wrote:
>
>
> On 2020/2/17 6:47 PM, Nikolay Borisov wrote:
>>
>>
>> On 17.02.20 8:31, Qu Wenruo wrote:
>>> This function will go to the next inline/keyed backref for the
>>> btrfs_backref_iterator infrastructure.
>>>
>>> Signed-off-by: Qu Wenruo <wqu@xxxxxxxx>
>>> ---
>>> fs/btrfs/backref.c | 47 ++++++++++++++++++++++++++++++++++++++++++++++
>>> fs/btrfs/backref.h | 34 +++++++++++++++++++++++++++++++++
>>> 2 files changed, 81 insertions(+)
>>>
>>> diff --git a/fs/btrfs/backref.c b/fs/btrfs/backref.c
>>> index 8bd5e067831c..fb0abe344851 100644
>>> --- a/fs/btrfs/backref.c
>>> +++ b/fs/btrfs/backref.c
>>> @@ -2310,3 +2310,50 @@ int btrfs_backref_iterator_start(struct btrfs_backref_iterator *iterator,
>>> btrfs_backref_iterator_release(iterator);
>>> return ret;
>>> }
>>> +
>>> +int btrfs_backref_iterator_next(struct btrfs_backref_iterator *iterator)
>>
>> Document the return values: 0 in case there are more backrefs for the
>> given bytenr, 1 in case there aren't, and a negative value in case of
>> error.
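>>
>> Something along these lines would do (just a sketch of the wording,
>> tweak as you see fit):
>>
>> /*
>>  * Go to the next backref item of the same bytenr.
>>  *
>>  * Return 0 if there is a next backref to be processed, 1 if all
>>  * backrefs of this bytenr have been iterated, and <0 on error.
>>  */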
>>
>>> +{
>>> + struct extent_buffer *eb = btrfs_backref_get_eb(iterator);
>>> + struct btrfs_path *path = iterator->path;
>>> + struct btrfs_extent_inline_ref *iref;
>>> + int ret;
>>> + u32 size;
>>> +
>>> + if (btrfs_backref_iterator_is_inline_ref(iterator)) {
>>> + /* We're still inside the inline refs */
>>> + if (btrfs_backref_has_tree_block_info(iterator)) {
>>> + /* First tree block info */
>>> + size = sizeof(struct btrfs_tree_block_info);
>>> + } else {
>>> + /* Use inline ref type to determine the size */
>>> + int type;
>>> +
>>> + iref = (struct btrfs_extent_inline_ref *)
>>> + (iterator->cur_ptr);
>>> + type = btrfs_extent_inline_ref_type(eb, iref);
>>> +
>>> + size = btrfs_extent_inline_ref_size(type);
>>> + }
>>> + iterator->cur_ptr += size;
>>> + if (iterator->cur_ptr < iterator->end_ptr)
>>> + return 0;
>>> +
>>> + /* All inline items iterated, fall through */
>>> + }
>>
>> This if could be rewritten as:
>>
>> if (btrfs_backref_iterator_is_inline_ref(iterator) &&
>>     iterator->cur_ptr < iterator->end_ptr)
>>
>> What this achieves is:
>>
>> 1. Clarity that this whole branch is executed only if we are within the
>> limits of the inline refs.
>>
>> 2. It also optimises the function: in the current version, after the
>> last inline backref has been processed, iterator->cur_ptr ==
>> iterator->end_ptr. On the next call to btrfs_backref_iterator_next you
>> will needlessly execute
>>
>> iref = (struct btrfs_extent_inline_ref *)(iterator->cur_ptr);
>> type = btrfs_extent_inline_ref_type(eb, iref);
>> size = btrfs_extent_inline_ref_size(type);
>> iterator->cur_ptr += size;
>>
>> only to fail the "if (iterator->cur_ptr < iterator->end_ptr)" check and
>> continue processing keyed items.
>>
>> As a matter of fact you will be reading past the metadata item, since
>> cur_ptr will be at its end and any dereferences will read from the next
>> item. This might not cause a crash but it's still wrong.
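>>
>> I.e. roughly something like this (untested, just to show the shape,
>> reusing your existing size calculation):
>>
>> 	if (btrfs_backref_iterator_is_inline_ref(iterator) &&
>> 	    iterator->cur_ptr < iterator->end_ptr) {
>> 		/* We're still inside the inline refs */
>> 		if (btrfs_backref_has_tree_block_info(iterator)) {
>> 			/* First tree block info */
>> 			size = sizeof(struct btrfs_tree_block_info);
>> 		} else {
>> 			/* Use the inline ref type to determine the size */
>> 			int type;
>>
>> 			iref = (struct btrfs_extent_inline_ref *)
>> 				(iterator->cur_ptr);
>> 			type = btrfs_extent_inline_ref_type(eb, iref);
>> 			size = btrfs_extent_inline_ref_size(type);
>> 		}
>> 		iterator->cur_ptr += size;
>> 		if (iterator->cur_ptr < iterator->end_ptr)
>> 			return 0;
>>
>> 		/* All inline items iterated, fall through to keyed items */
>> 	}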
>
> This shouldn't happen, as we must ensure the cur_ptr < item_end for callers.
How are you ensuring this? Before processing the last inline ref,
cur_ptr would be end_ptr - btrfs_extent_inline_ref_size(type); after
it's processed, cur_ptr == end_ptr. Then you will do another call to
btrfs_backref_iterator_next which will do the same calculation again?
What am I missing?
>
> For the _next() call, the check done after cur_ptr has been increased
> makes sure it's OK.
>
> But it's a problem for the _start() call, as we may have a case where an
> EXTENT_ITEM/METADATA_ITEM has no inline refs at all.
>
> I'll fix this in next version.
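>
> Probably something like this in _start(), right after cur_ptr and
> end_ptr are set up (untested, and it still needs a check that the next
> item belongs to the same bytenr):
>
> 	if (iterator->cur_ptr >= iterator->end_ptr) {
> 		/* No inline refs at all, go to the first keyed backref */
> 		ret = btrfs_next_item(iterator->fs_info->extent_root,
> 				      iterator->path);
> 		if (ret) {
> 			btrfs_backref_iterator_release(iterator);
> 			return ret;
> 		}
> 	}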
>
> Thanks,
> Qu
>
>>
>>
>>> + /* We're at keyed items, there is no inline item, just go next item */
>>> + ret = btrfs_next_item(iterator->fs_info->extent_root, iterator->path);
>>> + if (ret > 0 || ret < 0)
>>> + return ret;
>>
>> nit: if (ret != 0) return ret;
>>
>> <snip>
>>