Re: [PATCH v2] btrfs: scrub: Don't check free space before marking a block group RO

On 2020/1/18 9:16 AM, Qu Wenruo wrote:
> 
> 
>> On 2020/1/18 1:59 AM, Filipe Manana wrote:
>> On Fri, Nov 15, 2019 at 2:11 AM Qu Wenruo <wqu@xxxxxxxx> wrote:
>>>
>>> [BUG]
>>> When running btrfs/072 with only one online CPU, it has a pretty high
>>> chance of failing:
>>>
>>>   btrfs/072 12s ... _check_dmesg: something found in dmesg (see xfstests-dev/results//btrfs/072.dmesg)
>>>   - output mismatch (see xfstests-dev/results//btrfs/072.out.bad)
>>>       --- tests/btrfs/072.out     2019-10-22 15:18:14.008965340 +0800
>>>       +++ /xfstests-dev/results//btrfs/072.out.bad      2019-11-14 15:56:45.877152240 +0800
>>>       @@ -1,2 +1,3 @@
>>>        QA output created by 072
>>>        Silence is golden
>>>       +Scrub find errors in "-m dup -d single" test
>>>       ...
>>>
>>> And with the following call trace:
>>>   BTRFS info (device dm-5): scrub: started on devid 1
>>>   ------------[ cut here ]------------
>>>   BTRFS: Transaction aborted (error -27)
>>>   WARNING: CPU: 0 PID: 55087 at fs/btrfs/block-group.c:1890 btrfs_create_pending_block_groups+0x3e6/0x470 [btrfs]
>>>   CPU: 0 PID: 55087 Comm: btrfs Tainted: G        W  O      5.4.0-rc1-custom+ #13
>>>   Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
>>>   RIP: 0010:btrfs_create_pending_block_groups+0x3e6/0x470 [btrfs]
>>>   Call Trace:
>>>    __btrfs_end_transaction+0xdb/0x310 [btrfs]
>>>    btrfs_end_transaction+0x10/0x20 [btrfs]
>>>    btrfs_inc_block_group_ro+0x1c9/0x210 [btrfs]
>>>    scrub_enumerate_chunks+0x264/0x940 [btrfs]
>>>    btrfs_scrub_dev+0x45c/0x8f0 [btrfs]
>>>    btrfs_ioctl+0x31a1/0x3fb0 [btrfs]
>>>    do_vfs_ioctl+0x636/0xaa0
>>>    ksys_ioctl+0x67/0x90
>>>    __x64_sys_ioctl+0x43/0x50
>>>    do_syscall_64+0x79/0xe0
>>>    entry_SYSCALL_64_after_hwframe+0x49/0xbe
>>>   ---[ end trace 166c865cec7688e7 ]---
>>>
>>> [CAUSE]
>>> The error number -27 is -EFBIG, returned from the following call chain:
>>> btrfs_end_transaction()
>>> |- __btrfs_end_transaction()
>>>    |- btrfs_create_pending_block_groups()
>>>       |- btrfs_finish_chunk_alloc()
>>>          |- btrfs_add_system_chunk()
>>>
>>> This happens because we have used up all space of
>>> btrfs_super_block::sys_chunk_array.
>>>
>>> The root cause is the following bad loop, which creates tons of
>>> system chunks:
>>> 1. The only SYSTEM chunk is being scrubbed
>>>    It's very common to have only one SYSTEM chunk.
>>> 2. A new SYSTEM bg will be allocated
>>>    btrfs_inc_block_group_ro() checks whether we still have enough space
>>>    after marking the current bg RO. If not, it allocates a new chunk.
>>> 3. The new SYSTEM bg is still empty, so it will be reclaimed
>>>    During the reclaim, we mark it RO again.
>>> 4. That newly allocated empty SYSTEM bg gets scrubbed
>>>    We go back to step 2, as the bg is already marked RO but not
>>>    cleaned up yet.
>>>
>>> If the cleaner kthread doesn't get executed fast enough (e.g. with only
>>> one CPU), then we will get more and more empty SYSTEM chunks, using up
>>> all the space of btrfs_super_block::sys_chunk_array.
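
To make the spiral easier to see, here it is in rough pseudocode (a
simplified illustration based on the description above, not the actual
kernel code):

    /*
     * Simplified sketch of the spiral, loosely following
     * scrub_enumerate_chunks(); not actual kernel code.
     */
    for_each_block_group(cache) {
        /* Step 1: mark the bg RO before scrubbing it */
        btrfs_inc_block_group_ro(cache);
        /*
         * Step 2: the RO bg no longer counts as writable space, so a
         * new (empty) SYSTEM chunk gets allocated to compensate.
         */
        scrub_chunk(cache);
        btrfs_dec_block_group_ro(cache);
        /*
         * Step 3: the empty SYSTEM bg waits for the cleaner kthread.
         * Step 4: if scrub reaches it before the cleaner runs, we go
         * back to step 2 and allocate yet another empty SYSTEM chunk.
         */
    }
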
>>>
>>> [FIX]
>>> Since scrub/dev-replace doesn't always need to allocate new extents,
>>> especially chunk tree extents, we don't really need to do chunk
>>> pre-allocation.
>>>
>>> To break the above spiral, introduce a new parameter to
>>> btrfs_inc_block_group_ro(), @do_chunk_alloc, which indicates whether
>>> we need extra chunk pre-allocation.
>>>
>>> For relocation, we pass @do_chunk_alloc=true, while for scrub, we pass
>>> @do_chunk_alloc=false.
>>> This should keep unnecessary empty chunks from popping up for scrub.
>>>
>>> Also, since btrfs_inc_block_group_ro() now takes two parameters, add
>>> more comments for it.
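
For reference, the reworked helper and its two call sites end up looking
roughly like this (simplified sketch; the exact prototype and callers in
the patch may differ slightly):

    int btrfs_inc_block_group_ro(struct btrfs_block_group *cache,
                                 bool do_chunk_alloc);

    /* relocation: keep the chunk pre-allocation, COW needs the space */
    ret = btrfs_inc_block_group_ro(rc->block_group, true);

    /* scrub/dev-replace: skip the pre-allocation to break the spiral */
    ret = btrfs_inc_block_group_ro(cache, false);
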
>>>
>>> Signed-off-by: Qu Wenruo <wqu@xxxxxxxx>
>>
>> Qu,
>>
>> Strangely, this has caused some unexpected failures on test btrfs/071
>> (fsstress + device replace + remount followed by scrub).
> 
> How reproducible?
> 
> I also hit rare csum corruptions in btrfs/06[45] and btrfs/071.
> That's on both v5.5-rc6 and misc-next.
> 
> In my runs, the reproducibility is around 1/20 to 1/50.
> 
>>
>> Since I hadn't seen the issue in my 5.4 (and older) based branches,
>> and only started to observe the failure in 5.5-rc2+, I left a VM
>> bisecting it since last week, after coming back from vacation.
>> The bisection points to this change. And going to 5.5-rc5 and
>> reverting this change, or just doing:
>>
>> diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
>> index 21de630b0730..87478654a3e1 100644
>> --- a/fs/btrfs/scrub.c
>> +++ b/fs/btrfs/scrub.c
>> @@ -3578,7 +3578,7 @@ int scrub_enumerate_chunks(struct scrub_ctx *sctx,
>>                  * thread can't be triggered fast enough, and use up all space
>>                  * of btrfs_super_block::sys_chunk_array
>>                  */
>> -               ret = btrfs_inc_block_group_ro(cache, false);
>> +               ret = btrfs_inc_block_group_ro(cache, true);
>>                 scrub_pause_off(fs_info);
>>
>>                 if (ret == 0) {
>>
>> which is simpler than reverting due to conflicts, confirms this patch
>> is what causes btrfs/071 to fail like this:
>>
>> $ cat results/btrfs/071.out.bad
>> QA output created by 071
>> Silence is golden
>> Scrub find errors in "-m raid0 -d raid0" test
> 
> In my case it's not only raid0/raid0, but also single/single.
> 
>>
>> $ cat results/btrfs/071.full
>> (...)
>> Test -m raid0 -d raid0
>> Run fsstress  -p 20 -n 100 -d
>> /home/fdmanana/btrfs-tests/scratch_1/stressdir -f rexchange=0 -f
>> rwhiteout=0
>> Start replace worker: 17813
>> Wait for fsstress to exit and kill all background workers
>> seed = 1579455326
>> dev_pool=/dev/sdd /dev/sde /dev/sdf
>> free_dev=/dev/sdg, src_dev=/dev/sdd
>> Replacing /dev/sdd with /dev/sdg
>> Replacing /dev/sde with /dev/sdd
>> Replacing /dev/sdf with /dev/sde
>> Replacing /dev/sdg with /dev/sdf
>> Replacing /dev/sdd with /dev/sdg
>> Replacing /dev/sde with /dev/sdd
>> Replacing /dev/sdf with /dev/sde
>> Replacing /dev/sdg with /dev/sdf
>> Replacing /dev/sdd with /dev/sdg
>> Replacing /dev/sde with /dev/sdd
>> Scrub the filesystem
>> ERROR: there are uncorrectable errors
>> scrub done for 0f63c9b5-5618-4484-b6f2-0b7d3294cf05
>> Scrub started:    Fri Jan 17 12:31:35 2020
>> Status:           finished
>> Duration:         0:00:00
>> Total to scrub:   5.02GiB
>> Rate:             0.00B/s
>> Error summary:    csum=1
>>   Corrected:      0
>>   Uncorrectable:  1
>>   Unverified:     0
>> Scrub find errors in "-m raid0 -d raid0" test
>> (...)
>>
>> And in syslog:
>>
>> (...)
>> Jan  9 13:14:15 debian5 kernel: [1739740.727952] BTRFS info (device
>> sdc): dev_replace from /dev/sde (devid 4) to /dev/sdd started
>> Jan  9 13:14:15 debian5 kernel: [1739740.752226]
>> scrub_handle_errored_block: 8 callbacks suppressed
>> Jan  9 13:14:15 debian5 kernel: [1739740.752228] BTRFS warning (device
>> sdc): checksum error at logical 1129050112 on dev /dev/sde, physical
>> 277803008, root 5, inode 405, offset 1540096, length 4096, links 1
>> (path: stressdir/pa/d2/d5/fa)
> 
Since I have no clue why this patch is causing the problem, I'm just
poking around to see how it's related.

- It's on-disk data corruption.
  btrfs check --check-data-csum also reports a similar error on the fs,
  so it's not a false alert from scrub.

Considering the impact, I guess it may be worth using your quick fix, or
at least doing chunk pre-allocation only for data chunks.
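
For the data-chunk idea, I mean something like this in
scrub_enumerate_chunks() (untested, just to illustrate):

    /*
     * Untested sketch: only pre-allocate when the bg being scrubbed is
     * a DATA one, so metadata/system scrub can't spiral into piles of
     * empty SYSTEM chunks.
     */
    ret = btrfs_inc_block_group_ro(cache,
                    cache->flags & BTRFS_BLOCK_GROUP_DATA);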

Thanks,
Qu


