On Tue, Nov 27, 2018 at 10:29:57AM +0200, Nikolay Borisov wrote:
>
>
> On 21.11.18 г. 21:09 ч., Josef Bacik wrote:
> > The throttle path doesn't take cleaner_delayed_iput_mutex, which means
>
> Which one is the throttle path? btrfs_end_transaction_throttle is only
> called during snapshot drop and relocation. What scenario triggered your
> observation and prompted this fix?
>
One of my enospc tests runs snapshot creation/deletion in the background.
> > we could think we're done flushing iputs in the data space reservation
> > path when we could have a throttler doing an iput. There's no real
> > reason to serialize the delayed iput flushing, so instead of taking the
> > cleaner_delayed_iput_mutex whenever we flush the delayed iputs just
> > replace it with an atomic counter and a waitqueue. This removes the
> > short (or long depending on how big the inode is) window where we think
> > there are no more pending iputs when there really are some.
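The fs/btrfs/inode.c hunk isn't quoted below, so for context: it's the
usual counter/waitqueue pairing. Roughly, as a sketch (field names per the
ctree.h hunk; the existing "not the last reference" fast path and the
list-empty assertions are elided):

void btrfs_add_delayed_iput(struct inode *inode)
{
        struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb);
        struct btrfs_inode *binode = BTRFS_I(inode);

        /*
         * Bump the counter before the inode is visible on the list, so a
         * waiter that observes nr_delayed_iputs == 0 knows the list is
         * really drained.
         */
        atomic_inc(&fs_info->nr_delayed_iputs);
        spin_lock(&fs_info->delayed_iput_lock);
        list_add_tail(&binode->delayed_iput, &fs_info->delayed_iputs);
        spin_unlock(&fs_info->delayed_iput_lock);
}

void btrfs_run_delayed_iputs(struct btrfs_fs_info *fs_info)
{
        spin_lock(&fs_info->delayed_iput_lock);
        while (!list_empty(&fs_info->delayed_iputs)) {
                struct btrfs_inode *inode;

                inode = list_first_entry(&fs_info->delayed_iputs,
                                         struct btrfs_inode, delayed_iput);
                list_del_init(&inode->delayed_iput);
                spin_unlock(&fs_info->delayed_iput_lock);
                iput(&inode->vfs_inode);
                /* Only wake waiters once every queued iput has run. */
                if (atomic_dec_and_test(&fs_info->nr_delayed_iputs))
                        wake_up(&fs_info->delayed_iputs_wait);
                spin_lock(&fs_info->delayed_iput_lock);
        }
        spin_unlock(&fs_info->delayed_iput_lock);
}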
> >
> > Signed-off-by: Josef Bacik <josef@xxxxxxxxxxxxxx>
> > ---
> >  fs/btrfs/ctree.h       |  4 +++-
> >  fs/btrfs/disk-io.c     |  5 ++---
> >  fs/btrfs/extent-tree.c |  9 +++++----
> >  fs/btrfs/inode.c       | 21 +++++++++++++++++++++
> >  4 files changed, 31 insertions(+), 8 deletions(-)
> >
> > diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
> > index 709de7471d86..a835fe7076eb 100644
> > --- a/fs/btrfs/ctree.h
> > +++ b/fs/btrfs/ctree.h
> > @@ -912,7 +912,8 @@ struct btrfs_fs_info {
> >  
> >  	spinlock_t delayed_iput_lock;
> >  	struct list_head delayed_iputs;
> > -	struct mutex cleaner_delayed_iput_mutex;
> > +	atomic_t nr_delayed_iputs;
> > +	wait_queue_head_t delayed_iputs_wait;
> >  
> >  	/* this protects tree_mod_seq_list */
> >  	spinlock_t tree_mod_seq_lock;
> > @@ -3237,6 +3238,7 @@ int btrfs_orphan_cleanup(struct btrfs_root *root);
> >  int btrfs_cont_expand(struct inode *inode, loff_t oldsize, loff_t size);
> >  void btrfs_add_delayed_iput(struct inode *inode);
> >  void btrfs_run_delayed_iputs(struct btrfs_fs_info *fs_info);
> > +int btrfs_wait_on_delayed_iputs(struct btrfs_fs_info *fs_info);
> >  int btrfs_prealloc_file_range(struct inode *inode, int mode,
> >  			      u64 start, u64 num_bytes, u64 min_size,
> >  			      loff_t actual_len, u64 *alloc_hint);
> > diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
> > index c5918ff8241b..3f81dfaefa32 100644
> > --- a/fs/btrfs/disk-io.c
> > +++ b/fs/btrfs/disk-io.c
> > @@ -1692,9 +1692,7 @@ static int cleaner_kthread(void *arg)
> >  			goto sleep;
> >  		}
> >  
> > -		mutex_lock(&fs_info->cleaner_delayed_iput_mutex);
> >  		btrfs_run_delayed_iputs(fs_info);
> > -		mutex_unlock(&fs_info->cleaner_delayed_iput_mutex);
> >  
> >  		again = btrfs_clean_one_deleted_snapshot(root);
> >  		mutex_unlock(&fs_info->cleaner_mutex);
> > @@ -2651,7 +2649,6 @@ int open_ctree(struct super_block *sb,
> >  	mutex_init(&fs_info->delete_unused_bgs_mutex);
> >  	mutex_init(&fs_info->reloc_mutex);
> >  	mutex_init(&fs_info->delalloc_root_mutex);
> > -	mutex_init(&fs_info->cleaner_delayed_iput_mutex);
> >  	seqlock_init(&fs_info->profiles_lock);
> >  
> >  	INIT_LIST_HEAD(&fs_info->dirty_cowonly_roots);
> > @@ -2673,6 +2670,7 @@ int open_ctree(struct super_block *sb,
> >  	atomic_set(&fs_info->defrag_running, 0);
> >  	atomic_set(&fs_info->qgroup_op_seq, 0);
> >  	atomic_set(&fs_info->reada_works_cnt, 0);
> > +	atomic_set(&fs_info->nr_delayed_iputs, 0);
> >  	atomic64_set(&fs_info->tree_mod_seq, 0);
> >  	fs_info->sb = sb;
> >  	fs_info->max_inline = BTRFS_DEFAULT_MAX_INLINE;
> > @@ -2750,6 +2748,7 @@ int open_ctree(struct super_block *sb,
> >  	init_waitqueue_head(&fs_info->transaction_wait);
> >  	init_waitqueue_head(&fs_info->transaction_blocked_wait);
> >  	init_waitqueue_head(&fs_info->async_submit_wait);
> > +	init_waitqueue_head(&fs_info->delayed_iputs_wait);
> >  
> >  	INIT_LIST_HEAD(&fs_info->pinned_chunks);
> >  
> > diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
> > index 3a90dc1d6b31..36f43876be22 100644
> > --- a/fs/btrfs/extent-tree.c
> > +++ b/fs/btrfs/extent-tree.c
> > @@ -4272,8 +4272,9 @@ int btrfs_alloc_data_chunk_ondemand(struct btrfs_inode *inode, u64 bytes)
> >  			 * operations. Wait for it to finish so that
> >  			 * more space is released.
> >  			 */
> > -			mutex_lock(&fs_info->cleaner_delayed_iput_mutex);
> > -			mutex_unlock(&fs_info->cleaner_delayed_iput_mutex);
> > +			ret = btrfs_wait_on_delayed_iputs(fs_info);
> > +			if (ret)
> > +				return ret;
> >  			goto again;
> >  		} else {
> >  			btrfs_end_transaction(trans);
> > @@ -4838,9 +4839,9 @@ static int may_commit_transaction(struct btrfs_fs_info *fs_info,
> >  	 * pinned space, so make sure we run the iputs before we do our pinned
> >  	 * bytes check below.
> >  	 */
> > -	mutex_lock(&fs_info->cleaner_delayed_iput_mutex);
> >  	btrfs_run_delayed_iputs(fs_info);
> > -	mutex_unlock(&fs_info->cleaner_delayed_iput_mutex);
> > +	wait_event(fs_info->delayed_iputs_wait,
> > +		   atomic_read(&fs_info->nr_delayed_iputs) == 0);
>
> Why open-code btrfs_wait_on_delayed_iputs(fs_info)? Just make it use
> wait_event instead of the killable version and use the function
> uniformly across the code base.
I don't want the flusher thread to be killable, but now that I've said that
out loud I realize it can't be anyway (the flush work runs from a kworker,
which never has a fatal signal pending), so I'll just use the helper. Thanks,
Josef
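
P.S. To spell out what "use the helper" means here, the uniform version
would be something like this sketch (matching the declaration in the
ctree.h hunk above; mapping the killable wait's error to -EINTR is my
assumption):

int btrfs_wait_on_delayed_iputs(struct btrfs_fs_info *fs_info)
{
        /*
         * Killable so a user task stuck behind a huge inode's iput can
         * still be fatally signalled; a kworker never has a fatal signal
         * pending, so for the flusher this behaves like plain
         * wait_event().
         */
        int ret = wait_event_killable(fs_info->delayed_iputs_wait,
                        atomic_read(&fs_info->nr_delayed_iputs) == 0);

        if (ret)
                return -EINTR;
        return 0;
}

The open-coded wait_event() in may_commit_transaction() above would then
become a call to this helper, with the (unlikely) -EINTR propagated.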