On Wed, 6 Mar 2013 22:06:50 -0500, Chris Mason wrote:
> On Wed, Mar 06, 2013 at 06:39:30PM -0700, Miao Xie wrote:
>> On Wed, 6 Mar 2013 09:53:28 -0500, Chris Mason wrote:
>> [SNIP]
>>> +	async_work->delayed_root = delayed_root;
>>> +	async_work->work.func = btrfs_async_run_delayed_root;
>>> +	async_work->work.flags = 0;
>>> +	if (nr)
>>> +		async_work->nr = 0;
>>> +	else
>>> +		async_work->nr = nr;
>>
>> The code here is wrong: the argument nr is the number of items we want to
>> deal with; if it is 0, we deal with them all.
>
> Whoops, thanks.  I missed that when I was cleaning things up.
>
>>>
>>> -	btrfs_wq_run_delayed_node(delayed_root, root, 0);
>>> +	btrfs_wq_run_delayed_node(delayed_root, root, BTRFS_DELAYED_BATCH);
>>>  }
>>
>> There is a problem: we may introduce lots of btrfs_works, and we need to
>> avoid that.
>
> It is possible, but we won't make more than we used to.  The real
> solution is to limit the workers per root, but the code isn't currently
> structured for that.  Right now the workers will exit out if the number
> of pending items is below the delayed limit, which isn't perfect but I
> think it's the best I can do right now.
>
> Do you see better ways to improve it?

What do you think about a per-cpu btrfs_work? If the btrfs_work on the
current cpu is already being dealt with, we don't queue it again; we just
update ->nr if needed and tell the worker that it needs to flush again.
(This approach is a bit ugly because the btrfs_work might not be handled
on its own cpu.) Rough sketches of the nr fix and of this per-cpu idea
follow below.

Thanks
Miao
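For reference, a sketch of the assignment as Miao describes it: nr is the
number of items to process and 0 means "process everything", so the value
can simply be passed through. This is only an illustration of the intent,
not the actual follow-up patch:

	async_work->delayed_root = delayed_root;
	async_work->work.func = btrfs_async_run_delayed_root;
	async_work->work.flags = 0;
	/* nr is the number of items to process; 0 means "process them all" */
	async_work->nr = nr;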

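And a very rough sketch of the per-cpu idea, assuming the btrfs worker
interface quoted above (btrfs_work with ->func/->flags, plus
btrfs_queue_worker() and fs_info->delayed_workers from that era). The
struct layout and the queued/need_requeue fields are invented for
illustration, synchronization with the worker thread is left out, and the
worker side (clearing queued and honoring need_requeue) is not shown; this
is a sketch of the idea, not a tested patch:

struct btrfs_async_delayed_work {
	struct btrfs_delayed_root *delayed_root;
	struct btrfs_work work;
	int nr;			/* 0 means "flush everything" */
	int queued;		/* already handed to the delayed workers? */
	int need_requeue;	/* worker should flush once more before clearing queued */
};

static DEFINE_PER_CPU(struct btrfs_async_delayed_work, delayed_work_pcpu);

static void btrfs_wq_run_delayed_node(struct btrfs_delayed_root *delayed_root,
				      struct btrfs_root *root, int nr)
{
	struct btrfs_async_delayed_work *async_work;

	async_work = &get_cpu_var(delayed_work_pcpu);
	if (async_work->queued) {
		/*
		 * A work from this cpu is already in flight: don't queue
		 * another one, just widen the batch and ask the worker to
		 * flush again when it finishes.
		 */
		if (nr == 0 || (async_work->nr && async_work->nr < nr))
			async_work->nr = nr;
		async_work->need_requeue = 1;
	} else {
		async_work->delayed_root = delayed_root;
		async_work->work.func = btrfs_async_run_delayed_root;
		async_work->work.flags = 0;
		async_work->nr = nr;
		async_work->queued = 1;
		btrfs_queue_worker(&root->fs_info->delayed_workers,
				   &async_work->work);
	}
	put_cpu_var(delayed_work_pcpu);
}

The drawback Miao mentions is visible here: the work is bound to the
submitting cpu's per-cpu slot, but the worker that eventually runs
btrfs_async_run_delayed_root may execute on a different cpu, so "per-cpu"
only bounds the number of outstanding works to one per cpu rather than
keeping the processing local.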