On Mon, Oct 07, 2019 at 04:17:39PM -0400, Dennis Zhou wrote:
> The number of discardable extents will serve as the rate-limiting metric
> for how often we should discard. This keeps track of discardable extents
> in the free space caches by maintaining deltas and propagating them to
> the global count.
>
> This also sets up a discard directory in btrfs sysfs and exports the
> total discard_extents count.
>
> Signed-off-by: Dennis Zhou <dennis@xxxxxxxxxx>
> ---
>  fs/btrfs/ctree.h            |  2 +
>  fs/btrfs/discard.c          |  2 +
>  fs/btrfs/discard.h          | 19 ++++++++
>  fs/btrfs/free-space-cache.c | 93 ++++++++++++++++++++++++++++++++++---
>  fs/btrfs/free-space-cache.h |  2 +
>  fs/btrfs/sysfs.c            | 33 +++++++++++++
>  6 files changed, 144 insertions(+), 7 deletions(-)
>
> diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
> index c328d2e85e4d..43e515939b9c 100644
> --- a/fs/btrfs/ctree.h
> +++ b/fs/btrfs/ctree.h
> @@ -447,6 +447,7 @@ struct btrfs_discard_ctl {
>          spinlock_t lock;
>          struct btrfs_block_group_cache *cache;
>          struct list_head discard_list[BTRFS_NR_DISCARD_LISTS];
> +        atomic_t discard_extents;
>  };
>
>  /* delayed seq elem */
> @@ -831,6 +832,7 @@ struct btrfs_fs_info {
>          struct btrfs_workqueue *scrub_wr_completion_workers;
>          struct btrfs_workqueue *scrub_parity_workers;
>
> +        struct kobject *discard_kobj;
>          struct btrfs_discard_ctl discard_ctl;
>
>  #ifdef CONFIG_BTRFS_FS_CHECK_INTEGRITY
> diff --git a/fs/btrfs/discard.c b/fs/btrfs/discard.c
> index 26a1e44b4bfa..0544eb6717d4 100644
> --- a/fs/btrfs/discard.c
> +++ b/fs/btrfs/discard.c
> @@ -298,6 +298,8 @@ void btrfs_discard_init(struct btrfs_fs_info *fs_info)
>
>          for (i = 0; i < BTRFS_NR_DISCARD_LISTS; i++)
>                  INIT_LIST_HEAD(&discard_ctl->discard_list[i]);
> +
> +        atomic_set(&discard_ctl->discard_extents, 0);
>  }
>
>  void btrfs_discard_cleanup(struct btrfs_fs_info *fs_info)
> diff --git a/fs/btrfs/discard.h b/fs/btrfs/discard.h
> index 22cfa7e401bb..85939d62521e 100644
> --- a/fs/btrfs/discard.h
> +++ b/fs/btrfs/discard.h
> @@ -71,4 +71,23 @@ void btrfs_discard_queue_work(struct btrfs_discard_ctl *discard_ctl,
>          btrfs_discard_schedule_work(discard_ctl, false);
>  }
>
> +static inline
> +void btrfs_discard_update_discardable(struct btrfs_block_group_cache *cache,
> +                                      struct btrfs_free_space_ctl *ctl)
> +{
> +        struct btrfs_discard_ctl *discard_ctl;
> +        s32 extents_delta;
> +
> +        if (!cache || !btrfs_test_opt(cache->fs_info, DISCARD_ASYNC))
> +                return;
> +
> +        discard_ctl = &cache->fs_info->discard_ctl;
> +
> +        extents_delta = ctl->discard_extents[0] - ctl->discard_extents[1];
> +        if (extents_delta) {
> +                atomic_add(extents_delta, &discard_ctl->discard_extents);
> +                ctl->discard_extents[1] = ctl->discard_extents[0];
> +        }
What the actual fuck? I assume you did this to avoid checking DISCARD_ASYNC on
every update, but man, this complexity is not worth it. We might as well update
the counter every time and avoid doing stuff like this.
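I.e. just bump the global counter directly at every spot where the free space
ctl gains or loses a discardable extent, something like (sketch only, these
call sites are made up for illustration):

        /* extent becomes discardable */
        atomic_inc(&discard_ctl->discard_extents);

        /* extent stops being discardable (trimmed or reused) */
        atomic_dec(&discard_ctl->discard_extents);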
If there's a better reason for doing it this way then I'm all ears, but even so
this is not the way to do it. Just do

        atomic_add(ctl->discard_extents, &discard_ctl->discard_extents);
        ctl->discard_extents = 0;

and avoid the two-step thing.
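Spelled out, the helper would collapse to something like this (completely
untested sketch; it assumes ctl->discard_extents is reworked from the
two-element array into a single s32 delta that gets zeroed on every sync):

        static inline
        void btrfs_discard_update_discardable(struct btrfs_block_group_cache *cache,
                                              struct btrfs_free_space_ctl *ctl)
        {
                struct btrfs_discard_ctl *discard_ctl;

                if (!cache || !btrfs_test_opt(cache->fs_info, DISCARD_ASYNC))
                        return;

                discard_ctl = &cache->fs_info->discard_ctl;

                /*
                 * ctl->discard_extents is just the delta since the last sync,
                 * so fold it into the global count and reset it.
                 */
                if (ctl->discard_extents) {
                        atomic_add(ctl->discard_extents,
                                   &discard_ctl->discard_extents);
                        ctl->discard_extents = 0;
                }
        }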
And add a comment, because there were about 5 minutes between me seeing this
and getting to your reasoning, and in between there was a lot of swearing.
Thanks,
Josef