Re: [PATCH v3 2/3] btrfs: backref: Implement btrfs_backref_iterator_next()

On 17.02.20 at 8:31, Qu Wenruo wrote:
> This function will go to the next inline/keyed backref for the
> btrfs_backref_iterator infrastructure.
> 
> Signed-off-by: Qu Wenruo <wqu@xxxxxxxx>
> ---
>  fs/btrfs/backref.c | 47 ++++++++++++++++++++++++++++++++++++++++++++++
>  fs/btrfs/backref.h | 34 +++++++++++++++++++++++++++++++++
>  2 files changed, 81 insertions(+)
> 
> diff --git a/fs/btrfs/backref.c b/fs/btrfs/backref.c
> index 8bd5e067831c..fb0abe344851 100644
> --- a/fs/btrfs/backref.c
> +++ b/fs/btrfs/backref.c
> @@ -2310,3 +2310,50 @@ int btrfs_backref_iterator_start(struct btrfs_backref_iterator *iterator,
>  	btrfs_backref_iterator_release(iterator);
>  	return ret;
>  }
> +
> +int btrfs_backref_iterator_next(struct btrfs_backref_iterator *iterator)

Document the return values: 0 in case there are more backrefs for the
given bytenr, 1 in case there aren't, and a negative value in case of
error.

> +{
> +	struct extent_buffer *eb = btrfs_backref_get_eb(iterator);
> +	struct btrfs_path *path = iterator->path;
> +	struct btrfs_extent_inline_ref *iref;
> +	int ret;
> +	u32 size;
> +
> +	if (btrfs_backref_iterator_is_inline_ref(iterator)) {
> +		/* We're still inside the inline refs */
> +		if (btrfs_backref_has_tree_block_info(iterator)) {
> +			/* First tree block info */
> +			size = sizeof(struct btrfs_tree_block_info);
> +		} else {
> +			/* Use inline ref type to determine the size */
> +			int type;
> +
> +			iref = (struct btrfs_extent_inline_ref *)
> +				(iterator->cur_ptr);
> +			type = btrfs_extent_inline_ref_type(eb, iref);
> +
> +			size = btrfs_extent_inline_ref_size(type);
> +		}
> +		iterator->cur_ptr += size;
> +		if (iterator->cur_ptr < iterator->end_ptr)
> +			return 0;
> +
> +		/* All inline items iterated, fall through */
> +	}

This if could be rewritten as:

if (btrfs_backref_iterator_is_inline_ref(iterator) &&
    iterator->cur_ptr < iterator->end_ptr)

What this achieves is:

1. Clarity: this whole branch is executed only if we are within the
inline ref limits.

2. It also optimises the function: in the current version, after the
last inline backref has been processed, iterator->cur_ptr ==
iterator->end_ptr. On the next call to btrfs_backref_iterator_next()
you will needlessly execute:

iref = (struct btrfs_extent_inline_ref *)(iterator->cur_ptr);
type = btrfs_extent_inline_ref_type(eb, iref);
size = btrfs_extent_inline_ref_size(type);
iterator->cur_ptr += size;

only to fail the "if (iterator->cur_ptr < iterator->end_ptr)" check and
continue processing keyed items.

As a matter of fact, you will be reading past the metadata item, since
cur_ptr will be at its end and any dereference will read from the next
item. This might not cause a crash, but it's still wrong.
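
I.e. the whole branch would look something like this (a quick, untested
sketch based on your patch, just to illustrate):

	if (btrfs_backref_iterator_is_inline_ref(iterator) &&
	    iterator->cur_ptr < iterator->end_ptr) {
		/* We're still inside the inline refs */
		if (btrfs_backref_has_tree_block_info(iterator)) {
			/* First tree block info */
			size = sizeof(struct btrfs_tree_block_info);
		} else {
			/* Use inline ref type to determine the size */
			int type;

			iref = (struct btrfs_extent_inline_ref *)
				(iterator->cur_ptr);
			type = btrfs_extent_inline_ref_type(eb, iref);

			size = btrfs_extent_inline_ref_size(type);
		}
		iterator->cur_ptr += size;
		if (iterator->cur_ptr < iterator->end_ptr)
			return 0;

		/* All inline items iterated, fall through to keyed refs */
	}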


> +	/* We're at keyed items, there is no inline item, just go next item */
> +	ret = btrfs_next_item(iterator->fs_info->extent_root, iterator->path);
> +	if (ret > 0 || ret < 0)
> +		return ret;

nit: if (ret != 0) return ret;

<snip>


