Re: FS corruption when mounting non-degraded after mounting degraded

On Thu, Jan 21, 2016 at 3:25 PM, Rian Hunter <rian@xxxxxxxxx> wrote:

>
> Start state: Normally functioning raid6 array. Device FOO intermittently
> fails and requires power cycle to work again. This has happened 25-50
> times in the past with no irresolvable data corruption.

For each drive, what are the results:
# smartctl -l scterc /dev/sdX
# cat /sys/block/sdX/device/timeout
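
The usual reason for asking: if a drive doesn't support SCT ERC (or has it
disabled) while the kernel's SCSI command timer sits at its default 30
seconds, the drive's internal bad-sector recovery can outlast the timer and
the whole drive gets reset or kicked, which looks a lot like an intermittent
failure. A typical adjustment, assuming the drive accepts SCT ERC commands
(sdX is a placeholder):
# smartctl -l scterc,70,70 /dev/sdX
If the drive can't do ERC at all, the other direction is to raise the
command timer instead, e.g.:
# echo 180 > /sys/block/sdX/device/timeout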





>
> * Unmount raid6 FS
> * Disconnect array.
> * Physically remove device FOO from array, add new device BAR to array.
> * Connect array
> * Mount raid6 array with "-o degraded"
> * Run "btrfs replace start 2 /dev/BAR /mnt"
> * Start VMs on FS
> * Machine freezes (not sure why)
> * Restart machine
> * Mount raid6 array with "-o degraded"
> * Replace job continues automatically
> * Start VMs on FS
> * After an hour: VMs have not started up yet (seeing hung-task
>   warnings in kernel). "btrfs replace status /mnt" shows 0.1% done

Do you have dmesg output that includes sysrq+w from around the time of the
hung-task warnings? That's pretty much always requested by the devs in
these cases.
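
If it happens again, something along these lines captures the blocked-task
state (the output file name is just an example):
# echo w > /proc/sysrq-trigger
# dmesg > hung-tasks.txt
Sysrq may need to be enabled first with "echo 1 > /proc/sys/kernel/sysrq".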


> * Cancel replace: "btrfs replace cancel /mnt"
> * Unmount raid6 FS
> * Disconnect array
> * Physically add device FOO back to array
> * Reconnect array
> * Mount raid6 array normally (no "-o degraded")
> * Run "btrfs replace start 2 /dev/BAR /mnt"

Hmm. It's an open question whether 'btrfs replace cancel' actually marks
/dev/BAR as wiped. If it doesn't, this second 'replace start' should have
failed unless you used -f or ran wipefs -a first. If the device wasn't
wiped by any of those, I'd expect things could get messy.
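
A quick way to check, with /dev/BAR standing in for the real device node:
# wipefs /dev/BAR
lists any signatures still present, and if you do want it cleared before
retrying,
# wipefs -a /dev/BAR
removes them (only do that if nothing else is using the device).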



> * Mount raid6 array with "-o degraded"
> * Run "btrfs replace start 2 /dev/BAR /mnt"
> * After an hour: Replace operation was automatically cancelled, lots
>   of "parent transid verify failed" in dmesg again.
> * Run "btrfs scrub," "btrfs scrub status" shows millions of
>   unrecoverable errors


Some others (no devs, however) have disagreed with me on this, so take it
with a grain of salt, but I don't understand the rationale for running
scrub on a degraded array. The first order of business is to get it
non-degraded; if that can't be done, scrub is pointless. Of course it will
report millions of unrecoverable errors: with a device missing, that's
exactly what I'd expect from a degraded scrub.
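
Before reaching for scrub I'd first confirm what the kernel actually sees,
e.g.:
# btrfs filesystem show /mnt
# btrfs device stats /mnt
If 'fi show' still reports a missing device, the array is degraded and the
scrub counts won't tell you anything useful about the surviving data.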




> * Cancel "btrfs scrub"
> * At this point I'm convinced this FS is in a very broken state and I
>   try to salvage whatever data could have changed since beginning the
>   process.

Agreed. Certainly not reassuring.
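
Not something you asked about, but for getting data off without risking
further writes, 'btrfs restore' can copy files out of an unmounted
filesystem read-only. The device and destination paths below are just
placeholders:
# btrfs restore -D /dev/sdX /tmp/out
does a dry run listing what it would recover, and
# btrfs restore /dev/sdX /mnt/salvage
actually copies the files out.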


> From a black box perspective, this led me to believe that the
> corruption happened during the replace operation after mounting
> normally after first mounting with "-o degraded." Of course,
> knowledge of the internals could easily verify this.

Filesystems are really difficult, so even knowledge of the internals
doesn't guarantee the devs will understand where the problem first
started in this case.


-- 
Chris Murphy