Re: Unusual crash -- data rolled back ~2 weeks?

----- Original Message -----
> From: "Qu Wenruo" <quwenruo.btrfs@xxxxxxx>
> To: "Timothy Pearson" <tpearson@xxxxxxxxxxxxxxxxxxxxx>
> Cc: "linux-btrfs" <linux-btrfs@xxxxxxxxxxxxxxx>
> Sent: Sunday, November 10, 2019 6:54:55 AM
> Subject: Re: Unusual crash -- data rolled back ~2 weeks?

> On 2019/11/10 2:47 PM, Timothy Pearson wrote:
>> 
>> 
>> ----- Original Message -----
>>> From: "Qu Wenruo" <quwenruo.btrfs@xxxxxxx>
>>> To: "Timothy Pearson" <tpearson@xxxxxxxxxxxxxxxxxxxxx>, "linux-btrfs"
>>> <linux-btrfs@xxxxxxxxxxxxxxx>
>>> Sent: Saturday, November 9, 2019 9:38:21 PM
>>> Subject: Re: Unusual crash -- data rolled back ~2 weeks?
>> 
>>> On 2019/11/10 6:33 AM, Timothy Pearson wrote:
>>>> We just experienced a very unusual crash on a Linux 5.3 file server using NFS to
>>>> serve a BTRFS filesystem.  NFS went into deadlock (D wait) with no apparent
>>>> underlying disk subsystem problems, and when the server was hard rebooted to
>>>> clear the D wait the BTRFS filesystem remounted itself in the state that it was
>>>> in approximately two weeks earlier (!).
>>>
>>> This means that for those two weeks, no btrfs transaction was committed.
>> 
>> Is there any hope of getting the data from that interval back via btrfs restore
>> or a similar tool, or does the lack of commit mean the data was stored in RAM
>> only and is therefore gone after the server reboot?
> 
> If it's a deadlock preventing new transactions from being committed,
> then no metadata was ever written back to disk, so there is no way to
> recover the metadata. You may find some data blocks that were written,
> but without the metadata they are meaningless.

OK, then I'll assume the data written in that window is unrecoverable at this point.
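
For anyone who finds this thread later: the read-only check we would
start with is a dry run of btrfs restore, which lists what it could
pull off the disk without writing anything (device and target paths
below are placeholders; flags per btrfs-progs):

    # list recoverable files without writing anything
    # -D = dry run, -v = verbose
    btrfs restore -D -v /dev/sdX /mnt/recovery

Given the explanation above, this would presumably only show the
pre-deadlock state, but it is a cheap way to confirm.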

Would the commit deadlock affect only one btrfs filesystem, or all of them on the machine?  I take it there is no automatic dmesg spew on an extended deadlock?  dmesg was completely clean at the time of the fault / reboot.
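
If it happens again, and assuming sysrq is enabled on the box, we will
dump the blocked tasks by hand before the hard reboot so there is
something useful to post:

    # enable all sysrq functions (if not already enabled)
    echo 1 > /proc/sys/kernel/sysrq
    # ask the kernel to dump all tasks in uninterruptible (D) state
    echo w > /proc/sysrq-trigger
    # the dump lands in the kernel log
    dmesg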

>> 
>> If the latter, I'm somewhat surprised given the I/O load on the disk array in
>> question, but it would also offer a clue as to why it hard locked the
>> filesystem eventually (presumably on memory exhaustion -- the server has
>> something like 128GB of RAM, so it could go quite a while before hitting the
>> physical RAM limits).
>> 
>>>
>>>>  There was also significant corruption of certain files (e.g. LDAP MDB and MySQL
>>>>  InnoDB) noted -- we restored from backup for those files, but are concerned
>>>>  about the status of the entire filesystem at this point.
>>>
>>> Btrfs check is needed to ensure no metadata corruption.
>>>
>>> Also, we need sysrq+w output to determine where we are deadlocking.
>>> Otherwise, it's really hard to find any clue from the report.
>> 
>> It would have been gathered if we'd known the filesystem was in this bad state.
>> At the time, the priority was on restoring service and we had assumed NFS had
>> just wedged itself (again).  It was only after reboot and remount that the
>> damage slowly came to light.
>> 
>> Do the described symptoms (what little we know of them at this point) line up
>> with the issues fixed by https://patchwork.kernel.org/patch/11141559/ ?  Right
>> now we're hoping that this particular issue was fixed by that series, but if
>> not we might consider increasing backup frequency to nightly for this
>> particular array and seeing if it happens again.
> 
> That fix is already in v5.3, thus I don't think that's the case.
> 
> Thanks,
> Qu

Looking more carefully, the server in question had somehow been booted on 5.3-rc3.  This was possibly because earlier kernels were showing driver problems with the other hardware, but in any case the machine was running 5.3-rc3, and the patch was created *after* the rc3 release.
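
For the record, a quick way to check whether a given fix is in the
kernel a machine is actually running (the commit hash below is a
placeholder for the one from that patchwork series) is:

    # first tag containing the commit, from a mainline checkout
    git describe --contains <commit-sha>
    # versus what the machine is booted on
    uname -r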

Thanks!



