In some cases, the reada background works can exit before all extents
are read, for example when a device reaches its workload limit
(MAX_IN_FLIGHT), or the total reads reach the max limit.
In the old code, every work queued 2x new works, so the large number
of works made the above problem rare.
After we limited the max number of works with the patch titled:
  btrfs: reada: limit max works count
the chance of hitting the above problem increased.
Fix:
Check for running background works in btrfs_reada_wait(), and create
one work if none exist.
Note:
1: This is a debug patch, discussed in the following thread on the
   maillist:
   Re: [PATCH 1/2] btrfs: reada: limit max works count
   I have not reproduced the problem from the above mail so far; this
   patch was created by reviewing the code.
   I also have not reproduced the problem before this patch; I only
   confirmed there is no problem after this patch is applied.
2: It is based on the patch named:
   btrfs: reada: limit max works count
   Both that patch and some details of this patch need more
   improvement before being applied.
Signed-off-by: Zhao Lei <zhaolei@xxxxxxxxxxxxxx>
---
fs/btrfs/reada.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/fs/btrfs/reada.c b/fs/btrfs/reada.c
index af1e7b6..e67ce05 100644
--- a/fs/btrfs/reada.c
+++ b/fs/btrfs/reada.c
@@ -957,6 +957,8 @@ int btrfs_reada_wait(void *handle)
struct reada_control *rc = handle;
while (atomic_read(&rc->elems)) {
+ if (!atomic_read(&works_cnt))
+ reada_start_machine(rc->root->fs_info);
wait_event_timeout(rc->wait, atomic_read(&rc->elems) == 0,
5 * HZ);
dump_devs(rc->root->fs_info,
@@ -975,6 +977,8 @@ int btrfs_reada_wait(void *handle)
struct reada_control *rc = handle;
while (atomic_read(&rc->elems)) {
+ if (!atomic_read(&works_cnt))
+ reada_start_machine(rc->root->fs_info);
wait_event(rc->wait, atomic_read(&rc->elems) == 0);
}
--
1.8.5.1