On Mon, Apr 22, 2019 at 10:46:51AM +0300, Nikolay Borisov wrote:
> There is a certain idiom used in multiple places in btrfs' codebase,
> dealing with flushing an ordered range. Factor it out into a separate
> function that can be reused. Future patches will replace the existing
> code with that function.
>
> Signed-off-by: Nikolay Borisov <nborisov@xxxxxxxx>
> ---
> fs/btrfs/ordered-data.c | 32 ++++++++++++++++++++++++++++++++
> fs/btrfs/ordered-data.h | 3 +++
> 2 files changed, 35 insertions(+)
>
> diff --git a/fs/btrfs/ordered-data.c b/fs/btrfs/ordered-data.c
> index 4d9bb0dea9af..65f6409c1c9f 100644
> --- a/fs/btrfs/ordered-data.c
> +++ b/fs/btrfs/ordered-data.c
> @@ -954,6 +954,38 @@ int btrfs_find_ordered_sum(struct inode *inode, u64 offset, u64 disk_bytenr,
> return index;
> }
>
> +/*
> + * btrfs_lock_and_flush_ordered_range - Lock the passed range and ensure all
> + * pending ordered extents in it are run to completion.
> + *
> + * @tree: IO tree used for locking out other users of the range
> + * @inode: Inode whose ordered tree is to be searched
> + * @start: Beginning of range to flush
> + * @end: Last byte of range to lock
> + * @cached_state: If passed, will return the extent state responsible for the
> + * locked range. It's the caller's responsibility to free the cached state.
> + *
> + * This function always returns with the given range locked, ensuring that
> + * after it returns no ordered extent in the range can be pending.
> + */
> +void btrfs_lock_and_flush_ordered_range(struct extent_io_tree *tree,
> + struct inode *inode, u64 start, u64 end,
> + struct extent_state **cached_state)
> +{
Please use btrfs_inode instead of inode for interfaces that are internal
to btrfs. This is not yet consistent across the codebase, but the plan is
to switch everything to btrfs_inode, so new code should follow that.
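
Something like the sketch below is what I have in mind; it's untested and
the body is just my take on the usual lock/lookup/wait idiom, but it shows
the interface taking btrfs_inode, with callers converting via BTRFS_I():

void btrfs_lock_and_flush_ordered_range(struct extent_io_tree *tree,
                                        struct btrfs_inode *inode,
                                        u64 start, u64 end,
                                        struct extent_state **cached_state)
{
        struct btrfs_ordered_extent *ordered;

        while (1) {
                lock_extent_bits(tree, start, end, cached_state);
                ordered = btrfs_lookup_ordered_range(inode, start,
                                                     end - start + 1);
                if (!ordered)
                        break;
                /*
                 * Drop the lock, wait for the ordered extent to complete
                 * and retry, so we never return with a pending ordered
                 * extent inside the locked range.
                 */
                unlock_extent_cached(tree, start, end, cached_state);
                btrfs_start_ordered_extent(&inode->vfs_inode, ordered, 1);
                btrfs_put_ordered_extent(ordered);
        }
}

A caller that only has a VFS inode would then pass BTRFS_I(inode) instead
of the raw inode.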