On 08/12/2018 09:19 PM, Scott E. Blomquist wrote:
> Hi All,
>
> Early this morning there was a power glitch that affected our system.
>
> The second enclosure went offline but the file system stayed up for a
> bit before rebooting and recovering the 2 missing arrays sdb1 and
> sdc1.
>
> When mounting we get....
>
> Aug 12 14:52:43 localhost kernel: [ 8536.649270] BTRFS info (device sda1): has skinny extents
> Aug 12 14:54:52 localhost kernel: [ 8665.900321] BTRFS error (device sda1): parent transid verify failed on 177443463479296 wanted 2159304 found 2159295
> Aug 12 14:54:52 localhost kernel: [ 8665.985512] BTRFS error (device sda1): parent transid verify failed on 177443463479296 wanted 2159304 found 2159295
> Aug 12 14:54:52 localhost kernel: [ 8666.056845] BTRFS error (device sda1): failed to read block groups: -5
> Aug 12 14:54:52 localhost kernel: [ 8666.254178] BTRFS error (device sda1): open_ctree failed
>
> We are here...
>
> # uname -a
> Linux localhost 4.17.14-custom #1 SMP Sun Aug 12 11:54:00 EDT 2018 x86_64 x86_64 x86_64 GNU/Linux
>
> # btrfs --version
> btrfs-progs v4.17.1
>
> # btrfs filesystem show
> Label: none  uuid: 8337c837-58cb-430a-a929-7f6d2f50bdbb
>         Total devices 3 FS bytes used 75.05TiB
>         devid    1 size 47.30TiB used 42.07TiB path /dev/sda1
>         devid    2 size 21.83TiB used 16.61TiB path /dev/sdb1
>         devid    3 size 21.83TiB used 16.61TiB path /dev/sdc1

What kind of devices are these? You say enclosure... is it a bunch of
disks doing its own RAID, with btrfs on top? Do you have RAID1 metadata
on top of that, or single?

If you go the mkfs route (I read the other replies), then at least also
find out what happened. If your storage is losing data in situations
like this while it told btrfs that the data was safe, you're running a
dangerous operation.

-- 
Hans van Kranenburg
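For context on the log above: "parent transid verify failed ... wanted X found Y" means the tree block btrfs read from disk belongs to an older committed generation than the metadata pointing at it expects. The size of the gap can be read straight off the logged numbers (a minimal sketch using the values from this report):

```shell
#!/bin/sh
# Generations from the logged error:
#   parent transid verify failed on 177443463479296 wanted 2159304 found 2159295
wanted=2159304
found=2159295

# The on-disk block is this many committed transactions behind what the
# parent pointer expects -- writes the storage acknowledged never landed.
echo $(( wanted - found ))   # prints 9
```

A gap of several generations like this suggests the enclosure acknowledged writes it then lost on power failure, which is Hans's point about unsafe storage. Before a mkfs, read-only attempts such as `mount -o ro,usebackuproot` or `btrfs restore` are the usual non-destructive things to try; whether they help depends on how much metadata survived, so treat this as a general note rather than advice specific to this case.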
