Re: Unexpected raid1 behaviour

On Mon, Dec 18, 2017 at 3:28 PM, Chris Murphy <lists@xxxxxxxxxxxxxxxxx> wrote:
> On Mon, Dec 18, 2017 at 1:49 AM, Anand Jain <anand.jain@xxxxxxxxxx> wrote:
>
>>  Agreed. IMO degraded-raid1-single-chunk is an accidental feature
>>  caused by [1], which we should revert, since:
>>    - balance (to raid1 chunks) may fail if the FS is nearly full
>>    - recovery (to raid1 chunks) will take more writes than
>>      recovery under degraded raid1 chunks
>
>
> The advantage of writing single chunks when degraded is the case
> where a missing device returns (is re-added, intact). Catching that
> device up with the first drive is a manual but simple invocation of
> 'btrfs balance start -dconvert=raid1,soft -mconvert=raid1,soft'. The
> alternative is a full balance or full scrub, which is pretty tedious
> for big arrays.
>
> mdadm uses bitmap=internal for any array larger than 100GB for this
> reason, avoiding full resync.
>
> 'btrfs subvolume find-new' will list all *added* files since an
> arbitrarily specified generation, but not deletions.
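
For illustration, a quick sketch of how 'btrfs subvolume find-new' can
be used here; the mount point /mnt/subvol and generation 12345 are
hypothetical examples, not values from this thread:

    # note the subvolume's current generation for later comparison
    btrfs subvolume show /mnt/subvol | grep -i generation

    # list files added or changed since generation 12345
    # (deletions are not reported)
    btrfs subvolume find-new /mnt/subvol 12345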

Looks like LVM raid types (the non-legacy ones that use the md driver)
also use a bitmap by default.
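
And for completeness, a minimal sketch of the catch-up workflow quoted
above; the device and mount point names (/dev/sdb, /mnt) are
hypothetical:

    # mount writable with one device missing (new writes while
    # degraded may land in single chunks)
    mount -o degraded /dev/sdb /mnt

    # after the missing device returns, convert any single/dup chunks
    # back to raid1; the 'soft' filter skips chunks that already have
    # the target profile, so this is much cheaper than a full balance
    btrfs balance start -dconvert=raid1,soft -mconvert=raid1,soft /mnt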

-- 
Chris Murphy



