On Fri, Aug 31, 2018 at 04:00:29PM -0700, Omar Sandoval wrote:
> On Thu, Aug 30, 2018 at 01:41:53PM -0400, Josef Bacik wrote:
> > From: Josef Bacik <jbacik@xxxxxx>
> >
> > Unify the extent_op handling as well: add a flag so we don't actually
> > run the extent op from check_ref_cleanup, and instead return a value
> > so that we can skip cleaning up the ref head.
> >
> > Signed-off-by: Josef Bacik <jbacik@xxxxxx>
> > ---
> > fs/btrfs/extent-tree.c | 17 +++++++++--------
> > 1 file changed, 9 insertions(+), 8 deletions(-)
> >
> > diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
> > index 4c9fd35bca07..87c42a2c45b1 100644
> > --- a/fs/btrfs/extent-tree.c
> > +++ b/fs/btrfs/extent-tree.c
> > @@ -2443,18 +2443,23 @@ static void unselect_delayed_ref_head(struct btrfs_delayed_ref_root *delayed_ref
> > }
> >
> > static int cleanup_extent_op(struct btrfs_trans_handle *trans,
> > - struct btrfs_delayed_ref_head *head)
> > + struct btrfs_delayed_ref_head *head,
> > + bool run_extent_op)
> > {
> > struct btrfs_delayed_extent_op *extent_op = head->extent_op;
> > int ret;
> >
> > if (!extent_op)
> > return 0;
> > +
> > head->extent_op = NULL;
> > if (head->must_insert_reserved) {
> > btrfs_free_delayed_extent_op(extent_op);
> > return 0;
> > + } else if (!run_extent_op) {
> > + return 1;
> > }
> > +
> > spin_unlock(&head->lock);
> > ret = run_delayed_extent_op(trans, head, extent_op);
> > btrfs_free_delayed_extent_op(extent_op);
>
> So if cleanup_extent_op() returns 1, the head is still locked unless
> run_extent_op was true. That's pretty confusing. Can we make it always
> unlock in the !must_insert_reserved case?
Agreed, it's confusing. Possibly cleanup_extent_op could be split into two
helpers instead (rough sketch below), but either way the locking semantics
should be made clearer.