On 03/31/2015 08:12 PM, Chris Mason wrote:
On Mon, Mar 30, 2015 at 1:42 PM, Torbjørn <lists@xxxxxxxxxxxxx> wrote:
Hi,
Just a follow up on this report.
The file system in question is a raid1 across two old 320 GB Western
Digital WD3200KS drives.
I yanked them out of the server to run a fsck on another computer
(after a proper shutdown).
One of the disks did not get properly detected on the secondary
computer.
Hopefully the fsck of the single disk is still of some value to you.
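(For anyone wanting to reproduce the check: a read-only fsck of a single
device can be run with something like

    btrfs check --readonly /dev/sdX

where /dev/sdX is just a placeholder for whichever device node the
surviving disk shows up as.)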
As you can see, there are several issues with the fs.
The system has occasionally had hard reboots.
The fs does not have any real value for me. Everything worth anything
is backed up.
I'll keep the drive around in case it's of any value for some devs.
As noted before: this (corrupted) fs only gets errors when booting
into 4.0-rc5. With 4.0-rc4 or earlier it works as if nothing is wrong.
This is really strange because we also have reports from v3.19 stable
kernels, but none of the btrfs patches between rc4 and rc5 were tagged
for stable.
Can I convince you to hammer a bit more on rc4? I'd like to make sure
it really was a regression introduced in rc5.
-chris
Perhaps I was a bit unclear. The error is triggered when booting into
rc5. If I then reset and boot rc4 or earlier, the error is still there.
After running zero-log I can boot into rc4 again.
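(For reference: by zero-log I mean clearing the log tree with
btrfs-progs, e.g. something along the lines of

    btrfs rescue zero-log /dev/sdX

or the older standalone btrfs-zero-log tool; /dev/sdX is again just a
placeholder for the affected device. Clearing the log tree discards the
last few seconds of fsynced writes, but lets the fs mount when log
replay is what trips it up.)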
I already did a fresh reinstall to different drives; I did not want to
keep the corrupted fs as root.
I'm not sure I can get the second disk in the raid1 to work. It was
head-crashing when I tried to attach it for the fsck. If I somehow get
it back online I can do some more testing. Anything in particular?
--
Torbjørn