On 2020/1/11 12:11 AM, Josef Bacik wrote:
> For some reason we've been translating the do_chunk_alloc flag that is
> passed to btrfs_inc_block_group_ro() into force in inc_block_group_ro(),
> but these are two different things.
>
> force for inc_block_group_ro() means we are forcing the block group
> read only no matter what, for example when the underlying chunk is
> marked read only. We must not do the space check here, as this block
> group has to become read only.
>
> btrfs_inc_block_group_ro() has a do_chunk_alloc flag that indicates that
> we need to pre-allocate a chunk before marking the block group read
> only. This has nothing to do with forcing, and in fact we _always_ want
> to do the space check in this case, so unconditionally pass false for
> force here.
>
> Then fix up inc_block_group_ro() to honor force as it's expected and
> documented to do.
>
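To make the two knobs concrete, the intended call pattern after this
patch looks roughly like the sketch below (the chunk_is_writeable
condition and the exact call sites are illustrative assumptions, not
copied from the tree):

        /*
         * Forced path: the underlying chunk cannot be written, so the
         * block group must become RO regardless of space accounting.
         */
        if (!chunk_is_writeable)                /* illustrative condition */
                inc_block_group_ro(cache, 1);   /* force: skip space check */

        /*
         * Normal path: btrfs_inc_block_group_ro(cache, do_chunk_alloc)
         * may pre-allocate a chunk first, but it always performs the
         * space check, so it passes force == 0 unconditionally.
         */
        ret = inc_block_group_ro(cache, 0);     /* never forced here */
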
> Signed-off-by: Josef Bacik <josef@xxxxxxxxxxxxxx>
> Reviewed-by: Nikolay Borisov <nborisov@xxxxxxxx>
It looks like my previous comment was based on a development branch
where we skip chunk allocation for scrub.
But since that work isn't upstream yet, there's no need to bother.
Reviewed-by: Qu Wenruo <wqu@xxxxxxxx>
Thanks,
Qu
> ---
> fs/btrfs/block-group.c | 9 ++++++++-
> 1 file changed, 8 insertions(+), 1 deletion(-)
>
> diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
> index 6f564e390153..2e94e14e30ee 100644
> --- a/fs/btrfs/block-group.c
> +++ b/fs/btrfs/block-group.c
> @@ -1190,8 +1190,15 @@ static int inc_block_group_ro(struct btrfs_block_group *cache, int force)
> spin_lock(&sinfo->lock);
> spin_lock(&cache->lock);
>
> - if (cache->ro) {
> + if (cache->ro || force) {
> cache->ro++;
> +
> + /*
> + * We should only be empty if we did force here and haven't
> + * already marked ourselves read only.
> + */
> + if (force && list_empty(&cache->ro_list))
> + list_add_tail(&cache->ro_list, &sinfo->ro_bgs);
> ret = 0;
> goto out;
> }
>