Thanks for the quick reply! See responses inline.

On Sat, 2019-08-24 at 19:01 +0800, Qu Wenruo wrote:
> On 2019/8/24 2:48 PM, Patrick Dijkgraaf wrote:
> > Hi all,
> >
> > My server hung this morning, and I had to hard-reset it. I did not
> > apply any updates. After the reboot, my FS won't mount:
> >
> > [Sat Aug 24 08:16:31 2019] BTRFS error (device sde2): super_total_bytes
> > 92017957797888 mismatch with fs_devices total_rw_bytes
> > 184035915595776
> > [Sat Aug 24 08:16:31 2019] BTRFS error (device sde2): failed to read
> > chunk tree: -22
> > [Sat Aug 24 08:16:31 2019] BTRFS error (device sde2): open_ctree failed
> >
> > However, running btrfs rescue shows:
> > [root@cornelis ~]# btrfs rescue fix-device-size /dev/sdh2
> > No device size related problem found
>
> That's strange.
>
> Would you please dump the chunk tree and super blocks?
> # btrfs ins dump-super -fFa /dev/sdh2

See: https://pastebin.com/f5Wn15sx

> # btrfs ins dump-tree -t chunk /dev/sdh2

This output is too large for pastebin. It is viewable/downloadable here:
https://kwek.duckstad.net/tree.txt

> And, have you tried to mount using different devices? If it's some super
> blocks get corrupted, using a different device to mount may help.
> (With that said, it's better to call that dump-super for each device)

Tried it with sde and sdh. Both give the same error.
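To collect the per-device dumps Qu asked for, a small loop like this could print one dump-super command per member device (the /dev/sd?2 names and /tmp output paths are assumptions taken from the `btrfs fi show` output later in this thread; pipe the output to `sh` to actually run it):

```shell
# Print a dump-super command for each member device of the 'data'
# filesystem, so the 16 super blocks can be compared side by side.
# Device names are assumptions based on the fi show output below.
for l in e f g h i j k l m n o p q r s t; do
    printf 'btrfs inspect-internal dump-super -fFa /dev/sd%s2 > /tmp/super-sd%s2.txt\n' "$l" "$l"
done
```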
> > FS config is shown below:
> > [root@cornelis ~]# btrfs fi show
> > Label: 'cornelis-btrfs'  uuid: ac643516-670e-40f3-aa4c-f329fc3795fd
> >     Total devices 1 FS bytes used 536.05GiB
> >     devid 1 size 800.00GiB used 630.02GiB path /dev/mapper/cornelis-cornelis--btrfs
> >
> > Label: 'data'  uuid: 43472491-7bb3-418c-b476-874a52e8b2b0
> >     Total devices 16 FS bytes used 36.61TiB
> >     devid  1 size 7.28TiB used 2.65TiB path /dev/sde2
> >     devid  2 size 3.64TiB used 2.65TiB path /dev/sdf2
> >     devid  3 size 3.64TiB used 2.65TiB path /dev/sdg2
> >     devid  4 size 7.28TiB used 2.65TiB path /dev/sdh2
> >     devid  5 size 3.64TiB used 2.65TiB path /dev/sdi2
> >     devid  6 size 7.28TiB used 2.65TiB path /dev/sdj2
> >     devid  7 size 3.64TiB used 2.65TiB path /dev/sdk2
> >     devid  8 size 3.64TiB used 2.65TiB path /dev/sdl2
> >     devid  9 size 7.28TiB used 2.65TiB path /dev/sdm2
> >     devid 10 size 3.64TiB used 2.65TiB path /dev/sdn2
> >     devid 11 size 7.28TiB used 2.65TiB path /dev/sdo2
> >     devid 12 size 3.64TiB used 2.65TiB path /dev/sdp2
> >     devid 13 size 7.28TiB used 2.65TiB path /dev/sdq2
> >     devid 14 size 7.28TiB used 2.65TiB path /dev/sdr2
> >     devid 15 size 3.64TiB used 2.65TiB path /dev/sds2
> >     devid 16 size 3.64TiB used 2.65TiB path /dev/sdt2
>
> What's the profile used on so many devices?
> RAID10?

It's RAID6. I know the risk, although I believe that should be minimal
nowadays.

> The simplest way to fix it is to just update the

Nice teaser! 😉 What should I update?

> Thanks,
> Qu

> > Other info:
> > [root@cornelis ~]# uname -r
> > 4.18.16-arch1-1-ARCH
> >
> > I was able to mount it using:
> > [root@cornelis ~]# mount -o usebackuproot,ro /dev/sdh2 /mnt/data
> >
> > Now updating my backup, but I REALLY hope to get this fixed on the
> > production server!
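Following Qu's suggestion to try mounting via different devices, a sketch like this could walk through every member device until one read-only mount succeeds (the device names, mount point, and DRY_RUN switch are all assumptions for illustration, not commands from this thread):

```shell
# Sketch: try a read-only usebackuproot mount through each member
# device in turn, stopping at the first success. Set DRY_RUN=1 to
# only print the commands instead of running them.
for l in e f g h i j k l m n o p q r s t; do
    dev="/dev/sd${l}2"
    if [ -n "$DRY_RUN" ]; then
        echo "mount -o usebackuproot,ro $dev /mnt/data"
    elif mount -o usebackuproot,ro "$dev" /mnt/data 2>/dev/null; then
        echo "mounted via $dev"
        break
    fi
done
```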
