On 16.03.2011 23:07, Andi Kleen wrote:
> Arne Jansen <sensille@xxxxxxx> writes:
>
>> +	 */
>> +	mutex_lock(&fs_info->scrub_lock);
>> +	atomic_inc(&fs_info->scrubs_running);
>> +	mutex_unlock(&fs_info->scrub_lock);
>
> It seems odd to protect an atomic_inc with a mutex.
> Is that done for some side effect? Otherwise you either
> don't need the atomic or don't need the lock.
The reason it is atomic is that it is checked inside a wait_event,
where I can't hold a lock. The mutex is there to protect the checks
in btrfs_scrub_pause and btrfs_scrub_cancel. But now that I think
about it, there is still a race condition left. I'll rethink the
locking there and see whether I can eliminate some of the mutex_locks.
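To make the intent a bit more concrete, here is a stripped-down sketch
of the pattern (hypothetical names, not the actual patch code): the
counter has to be atomic because wait_event() evaluates its condition
without any lock held, and the mutex only serializes the check-and-act
in pause/cancel against a new scrub starting.

#include <linux/atomic.h>
#include <linux/mutex.h>
#include <linux/wait.h>

struct scrub_state {			/* hypothetical, for illustration */
	struct mutex		lock;	/* serializes start vs. pause/cancel */
	atomic_t		running;	/* number of active scrub workers */
	wait_queue_head_t	wait;	/* woken when 'running' drops to 0 */
};

static void scrub_worker_start(struct scrub_state *s)
{
	mutex_lock(&s->lock);		/* don't race with a concurrent cancel */
	atomic_inc(&s->running);
	mutex_unlock(&s->lock);
}

static void scrub_worker_done(struct scrub_state *s)
{
	if (atomic_dec_and_test(&s->running))
		wake_up(&s->wait);
}

static void scrub_cancel_wait(struct scrub_state *s)
{
	mutex_lock(&s->lock);
	if (!atomic_read(&s->running)) {	/* nothing to cancel */
		mutex_unlock(&s->lock);
		return;
	}
	/* telling the workers to abort is omitted here */
	mutex_unlock(&s->lock);
	/* can't sleep in wait_event() with the mutex held,
	 * hence the atomic counter */
	wait_event(s->wait, atomic_read(&s->running) == 0);
}

Note that dropping the mutex before the wait_event() leaves a window
in which a new scrub can start unnoticed, which is exactly the kind of
race I still need to close.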
> That seems to be all over the source file.
>
>> +int btrfs_scrub_pause(struct btrfs_root *root)
>> +{
>> +	struct btrfs_fs_info *fs_info = root->fs_info;
>> +	mutex_lock(&fs_info->scrub_lock);
>
> As I understand it you take that mutex on every transaction
> commit, which is a fast path for normal IO.
A transaction commit only happens every 30 seconds. At that point,
all outstanding data gets flushed and the super blocks are written.
I only pause the scrub in a very late phase of the commit, when it
is already single-threaded.
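To illustrate why this stays off the hot path, a simplified sketch
(again hypothetical names, not the patch itself): the commit thread
flips a flag and waits for the workers to park themselves, and it does
so once per commit, not once per write. The worker side, which checks
the flag between chunks, is only hinted at in the comments.

#include <linux/atomic.h>
#include <linux/mutex.h>
#include <linux/types.h>
#include <linux/wait.h>

struct scrub_pause_state {		/* hypothetical, for illustration */
	struct mutex		lock;
	bool			pause_req;	/* set by the commit path */
	atomic_t		active;		/* workers currently issuing I/O */
	wait_queue_head_t	wait;
};

/* Called from the late, already single-threaded commit phase,
 * i.e. roughly once every 30 seconds. */
static void scrub_pause(struct scrub_pause_state *s)
{
	mutex_lock(&s->lock);
	s->pause_req = true;
	mutex_unlock(&s->lock);
	/* workers see pause_req between chunks, drop 'active' and park */
	wait_event(s->wait, atomic_read(&s->active) == 0);
}

static void scrub_resume(struct scrub_pause_state *s)
{
	mutex_lock(&s->lock);
	s->pause_req = false;
	mutex_unlock(&s->lock);
	wake_up(&s->wait);		/* let the parked workers continue */
}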
Apart from that, you can be sure that scrub will have an impact on
performance, as it keeps the disks 100% busy.
To mitigate this, all scrub activity happens inside ioctls; the idea
is that this way the user can control the impact of the scrub with
ionice.
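As a userspace illustration of that point (not btrfs code): because
the scrub reads are submitted from the calling process, putting that
process into the idle I/O class, either with ionice -c3 or directly
via ioprio_set() as below, is what throttles the scrub.

#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

/* ioprio_set() has no glibc wrapper; constants as in linux/ioprio.h */
#define IOPRIO_CLASS_IDLE	3
#define IOPRIO_CLASS_SHIFT	13
#define IOPRIO_WHO_PROCESS	1

int main(void)
{
	int ioprio = IOPRIO_CLASS_IDLE << IOPRIO_CLASS_SHIFT;

	/* put the current process into the idle I/O class */
	if (syscall(SYS_ioprio_set, IOPRIO_WHO_PROCESS, 0, ioprio) < 0) {
		perror("ioprio_set");
		return 1;
	}

	/* ... then issue the scrub ioctl from this process, so that
	 * all of its reads are submitted with idle priority ... */
	return 0;
}

Whether this actually throttles anything depends on the I/O scheduler
honouring priorities (CFQ does).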
--Arne
> For me that looks like a scalability problem with enough
> cores. Did you do any performance testing of this on a system
> with a reasonable number of cores?
>
> btrfs already has enough scalability problems, please don't
> add new ones.
>
> -Andi