On Mon, Aug 05, 2013 at 02:34:30PM +0200, Jan Schmidt wrote:
> Nice try hiding this one in a dedup patch set, but I finally found it :-)
Ahhhh, I didn't mean to ;-)
>
> On Wed, July 31, 2013 at 17:37 (+0200), Liu Bo wrote:
> > So we don't need to do the qgroups accounting trick when quota is not enabled.
> > This reduces my tester's running time from ~28s to ~23s.
> >
> > Signed-off-by: Liu Bo <bo.li.liu@xxxxxxxxxx>
> > ---
> > fs/btrfs/extent-tree.c | 6 ++++++
> > fs/btrfs/qgroup.c | 6 ++++++
> > 2 files changed, 12 insertions(+), 0 deletions(-)
> >
> > diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
> > index 10a5c72..c6612f5 100644
> > --- a/fs/btrfs/extent-tree.c
> > +++ b/fs/btrfs/extent-tree.c
> > @@ -2524,6 +2524,12 @@ int btrfs_delayed_refs_qgroup_accounting(struct btrfs_trans_handle *trans,
> > struct qgroup_update *qgroup_update;
> > int ret = 0;
> >
> > + if (!trans->root->fs_info->quota_enabled) {
> > + if (trans->delayed_ref_elem.seq)
> > + btrfs_put_tree_mod_seq(fs_info, &trans->delayed_ref_elem);
> > + return 0;
> > + }
> > +
> > if (list_empty(&trans->qgroup_ref_list) !=
> > !trans->delayed_ref_elem.seq) {
> > /* list without seq or seq without list */
> > diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
> > index 1280eff..f3e82aa 100644
> > --- a/fs/btrfs/qgroup.c
> > +++ b/fs/btrfs/qgroup.c
> > @@ -1200,6 +1200,9 @@ int btrfs_qgroup_record_ref(struct btrfs_trans_handle *trans,
> > {
> > struct qgroup_update *u;
> >
> > + if (!trans->root->fs_info->quota_enabled)
> > + return 0;
> > +
> > BUG_ON(!trans->delayed_ref_elem.seq);
> > u = kmalloc(sizeof(*u), GFP_NOFS);
> > if (!u)
> > @@ -1850,6 +1853,9 @@ out:
> >
> > void assert_qgroups_uptodate(struct btrfs_trans_handle *trans)
> > {
> > + if (!trans->root->fs_info->quota_enabled)
> > + return;
> > +
> > if (list_empty(&trans->qgroup_ref_list) && !trans->delayed_ref_elem.seq)
> > return;
> > pr_err("btrfs: qgroups not uptodate in trans handle %p: list is%s empty, seq is %#x.%x\n",
> >
>
> The second hunk looks sensible at first sight. However, hunks 1 and 3 don't. They
> assert consistency of qgroup state in well defined places. The fact that you
> need to disable those checks shows that skipping addition to the list in the
> second hunk cannot be right, or at least not sufficient.
I agree, only hunk 2 is necessary.
>
> We've got the list of qgroup operations trans->qgroup_ref_list and we've got the
> qgroup's delayed ref blocker, trans->delayed_ref_elem. If you stop adding to the
> list (hunk 2) which seems reasonable when quota is disabled, then you also must
> ensure you're not acquiring the delayed ref blocker element, which should give
> another performance boost.
WHY a 'must' here?
>
> need_ref_seq may be the right place for this change. It just feels a bit too
> obvious. The critical cases obviously are quota enable and quota disable. I just
> don't recall why it wasn't that way from the very beginning of qgroups, I might
> be missing something fundamental here.
Yeah, I thought about 'need_ref_seq', but the point is that the delayed ref
blocker not only serves qgroup accounting, but also features based on backref
walking, such as scrub and snapshot-aware defragment.
And I want dedup to work well with all of these features, including qgroups.
Any ideas?
-liubo