On 1/14/20 12:56 PM, David Sterba wrote:
On Fri, Jan 10, 2020 at 11:11:24AM -0500, Josef Bacik wrote:
While running xfstests with compression on, I noticed I was panicking on
btrfs/154. I bisected this down to my inc_block_group_ro patches, which
was strange.
Do you have stacktrace of the panic?
I don't have it with me; I can reproduce it when I get back. But it's a
BUG_ON(ret) in init_reloc_root when we do the copy_root, because we get an
ENOSPC when trying to allocate the tree block.
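From memory, the shape of that path is roughly the following; this is a
paraphrased sketch of fs/btrfs/relocation.c, not the exact code (I believe
the copy itself happens in create_reloc_root(), which init_reloc_root()
calls):

	/*
	 * Sketch: relocation clones the subvolume root into a reloc
	 * root.  An allocation failure from btrfs_copy_root() (e.g.
	 * ENOSPC while allocating the new tree block) trips the
	 * BUG_ON() instead of being unwound gracefully.
	 */
	ret = btrfs_copy_root(trans, root, root->commit_root, &eb,
			      BTRFS_TREE_RELOC_OBJECTID);
	BUG_ON(ret);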
What was happening is that with my patches we now use btrfs_can_overcommit()
to see if we can flip a block group read-only. Before, this would fail
because we weren't taking into account the usable unallocated space
available for allocating chunks. With my patches we were allowed to do the
balance, which is technically correct.
What patches does "my patches" mean?
The ones that convert inc_block_group_ro() to use btrfs_can_overcommit().
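Roughly, the idea of the conversion is the sketch below; this is from memory
and not the literal patch, so the exact fields and the surrounding locking
may differ:

	/*
	 * Sketch: instead of open-coding a free space check against
	 * the space_info totals, ask the overcommit code whether
	 * num_bytes could still be satisfied, which also counts
	 * unallocated space that can be turned into new chunks.
	 */
	num_bytes = cache->length - cache->reserved - cache->pinned -
		    cache->bytes_super - cache->used;
	if (force || btrfs_can_overcommit(fs_info, sinfo, num_bytes,
					  BTRFS_RESERVE_NO_FLUSH)) {
		sinfo->bytes_readonly += num_bytes;
		cache->ro++;
		ret = 0;
	}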
However, this test is testing restriping with a degraded mount, something
that isn't working right because Anand's fix for the test was never
actually merged.
Which patch is that?
It says in the header of btrfs/154. I don't have xfstests in front of me right now.
So now we're trying to allocate a chunk and cannot, because we want to
allocate a RAID1 chunk but there's only 1 device available for use. This
results in an ENOSPC in one of the BUG_ON(ret) paths in relocation (and a
tricky path that is going to take many more patches to fix).
But we shouldn't even be making it this far; we don't have enough
devices to restripe. The problem is we're using btrfs_num_devices(),
which for some reason includes missing devices. That's not actually
what we want; we want rw_devices.
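The count matters because btrfs_balance() derives the set of allowed target
profiles from it. Sketched from memory (the exact code in fs/btrfs/volumes.c
may differ slightly), it's something like:

	/*
	 * Sketch: any profile whose devs_min is satisfied by
	 * num_devices becomes an allowed restripe target.  On a
	 * degraded two-device raid1 mount, counting the missing device
	 * makes num_devices 2, so the raid1 restripe is (wrongly)
	 * allowed to proceed.
	 */
	num_devices = btrfs_num_devices(fs_info);
	allowed = 0;
	for (i = 0; i < ARRAY_SIZE(btrfs_raid_array); i++)
		if (num_devices >= btrfs_raid_array[i].devs_min)
			allowed |= btrfs_raid_array[i].bg_flag;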
The wrapper btrfs_num_devices takes into account an ongoing replace that
temporarily increases num_devices, so the result returned to balance is
adjusted.
It's right that we need to know the correct number of writable devices
at this point. With btrfs_num_devices we'd have to subtract the missing
devices, but in the end we can't use more than rw_devices.
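For reference, the wrapper is roughly this (quoted from memory, may not
match the tree exactly):

/*
 * fs_devices->num_devices counts missing devices too; an ongoing
 * replace temporarily adds the target device, so the wrapper
 * subtracts it again before reporting the count to balance.
 */
static int btrfs_num_devices(struct btrfs_fs_info *fs_info)
{
	u64 num_devices = fs_info->fs_devices->num_devices;

	down_read(&fs_info->dev_replace.rwsem);
	if (btrfs_dev_replace_is_ongoing(&fs_info->dev_replace)) {
		ASSERT(num_devices > 1);
		num_devices--;
	}
	up_read(&fs_info->dev_replace.rwsem);

	return num_devices;
}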
Fix this by getting the rw_devices. With this patch we're no longer
panicking with my other patches applied, and we're in fact erroring out
at the correct spot instead of at inc_block_group_ro. The fact that
this was working before was just sheer dumb luck.
Fixes: e4d8ec0f65b9 ("Btrfs: implement online profile changing")
Signed-off-by: Josef Bacik <josef@xxxxxxxxxxxxxx>
---
fs/btrfs/volumes.c | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index 7483521a928b..a92059555754 100644
--- a/fs/btrfs/volumes.c
+++ b/fs/btrfs/volumes.c
@@ -3881,7 +3881,14 @@ int btrfs_balance(struct btrfs_fs_info *fs_info,
}
}
- num_devices = btrfs_num_devices(fs_info);
+ /*
+ * rw_devices can be messed with by rm_device and device replace, so
+ * take the chunk_mutex to make sure we have a relatively consistent
+ * view of the fs at this point.
Well, what does 'relatively consistent' mean here? There are enough
locks and exclusion that device remove or replace should not change the
value until btrfs_balance ends, no?
Again, I don't have the code in front of me, but there's nothing at this
point to stop us from racing with the tail end of a device replace or a
device rm. The mutex keeps us from seeing weirdly inflated values while
replace increments and decrements the counters as it finishes, but there's
nothing (that I can remember) that will stop rw_devices from changing right
after we check it, hence 'relatively'.
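Something like this is the window I mean (a hypothetical sketch for
illustration, not the actual code):

	/*
	 * The balance side samples rw_devices under chunk_mutex, so it
	 * can't observe the transient counter state while replace is
	 * finishing, but nothing pins the value after the unlock.
	 */
	mutex_lock(&fs_info->chunk_mutex);
	num_devices = fs_info->fs_devices->rw_devices;
	mutex_unlock(&fs_info->chunk_mutex);
	/* rw_devices may change again as soon as the lock is dropped */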
Thanks,
Josef