Re: Slow performance with Btrfs RAID 10 with a failed disk


Austin S. Hemmelgarn <ahferroin7@xxxxxxxxx> writes:

> On 2019-11-27 03:36, Christopher Baines wrote:
>> Hey,
>>
>> I'm using RAID 10, and one of the disks has recently failed [1], and I'm
>> seeing plenty of warnings and errors in the dmesg output [2].
>>
>> What kind of performance should be expected from Btrfs when a disk has
>> failed? [3] At the moment, the system seems very slow. One contributing
>> factor may be that all the logging that Btrfs is generating is being
>> written to the btrfs filesystem that's degraded, probably causing more
>> log messages to be produced.
>>
>> I guess that replacing the failed disk is the long term solution to get
>> the filesystem back into proper operation, but is there anything else
>> that can be done to get it back operating until then?
>>
>> Also, is there anything that can stop btrfs logging so much about the
>> failures, now that I know that a disk has failed?
>
> You can solve both problems by replacing the disk or, if possible,
> just removing it from the array. In theory, you should be able to
> convert to regular raid1 and then remove the failed disk, though it
> will likely take a while. Given your output below, I'd actually drop
> /dev/sdb as well, and look at replacing both with a single 1TB disk
> like your other three.
>
> The issue here is that BTRFS doesn't see the disk as failed, so it
> keeps trying to access it. That's what's slowing things down (each
> access attempt has to time out) and why it's logging so much (BTRFS
> logs every IO error it encounters, as it should).

Thanks for the tips :)

I've now remounted the filesystem with the degraded flag.

However, I haven't managed to remove the disk from the array yet.

$ sudo btrfs filesystem show /
Label: none  uuid: 620115c7-89c7-4d79-a0bb-4957057d9991
	Total devices 6 FS bytes used 1.08TiB
	devid    1 size 72.70GiB used 72.70GiB path /dev/sda3
	devid    2 size 72.70GiB used 72.70GiB path /dev/sdb3
	devid    3 size 931.48GiB used 530.73GiB path /dev/sdc
	devid    4 size 931.48GiB used 530.73GiB path /dev/sdd
	devid    5 size 931.48GiB used 530.73GiB path /dev/sde
	*** Some devices missing

$ sudo btrfs device delete missing /
ERROR: error removing device 'missing': no missing devices found to remove


So, judging by the output of the first command, Btrfs knows at some
level that a device is missing, but it still won't delete it.

Am I missing something?
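For the record, another route I may try, assuming I'm reading the
btrfs-replace man page correctly, is to replace the missing device by
its devid once I have a new disk (the missing device should be devid 6
here, since devids 1-5 are accounted for above, but I'd confirm that
from the filesystem show output first):

$ # -r: only read from the source device if no other mirror is available,
$ # which is what we want for a missing/failed device
$ sudo btrfs replace start -r 6 /dev/sdf /
$ sudo btrfs replace status /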

Thanks,

Chris

Attachment: signature.asc
Description: PGP signature

