Re: How to Fix 'Error: could not find extent items for root 257'?

Hi Qu

Thanks for your reply. That's really helpful. BTW, I just read this URL and
the mail thread linked from it: https://unix.stackexchange.com/a/345972
It seems to say that when a RAID1 is degraded, even when mounted rw, no
operations other than btrfs-replace or btrfs-balance should be applied to it.

Does that mean a degraded RAID1 should not run btrfs-replace/balance and the
server's original rw services at the same time?

For example, I put a PostgreSQL DB on btrfs RAID1, and I thought one of the
two RAID1 copies was my backup. Even if I lose one copy, the service can keep
running on the other one immediately. Okay, maybe not immediately; I need to
reboot. But waiting 24 hours or longer for btrfs-replace/balance to complete,
depending on the size of the data, does not seem like a good idea.
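
For reference, a minimal sketch of the recovery flow I have in mind, with
placeholder device names, devid, and mount point (not from this setup):

# mount -o degraded /dev/surviving-disk /mnt
# btrfs replace start <missing-devid> /dev/new-disk /mnt
# btrfs replace status /mnt

As far as I understand, btrfs-replace runs while the filesystem stays
mounted, so the question is whether the rw services can safely share the
filesystem during that window.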

Regards,
Chiung-Ming Huang



Qu Wenruo <quwenruo.btrfs@xxxxxxx> wrote on Monday, February 10, 2020 at 3:03 PM:
>
>
>
> On 2020/2/10 2:50 PM, Chiung-Ming Huang wrote:
> > Qu Wenruo <quwenruo.btrfs@xxxxxxx> wrote on Friday, February 7, 2020 at 3:16 PM:
> >>
> >>
> >>
> >> On 2020/2/7 2:16 PM, Chiung-Ming Huang wrote:
> >>> Qu Wenruo <quwenruo.btrfs@xxxxxxx> wrote on Friday, February 7, 2020 at 12:00 PM:
> >>>>
> >>>> All these subvolumes had a missing root dir. That's not good either.
> >>>> I guess btrfs-restore is your last chance, or an RO mount with my
> >>>> rescue=skipbg patchset:
> >>>> https://patchwork.kernel.org/project/linux-btrfs/list/?series=170715
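> >>>>
> >>>> A rough sketch of both options (the restore target directory here is
> >>>> only a placeholder, and rescue=skipbg requires that patchset applied):
> >>>>
> >>>> # btrfs restore -v /dev/bcache3 /mnt/recovery/   # target dir is hypothetical
> >>>> # mount -o ro,rescue=skipbg /dev/bcache3 /mnt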
> >>>>
> >>>
> >>> Is it possible to use the original disks while keeping the restored
> >>> data safe? I would like to restore the data of /dev/bcache3 to the new
> >>> btrfs RAID0 first, and then add /dev/bcache3 itself to that RAID0. Does
> >>> `btrfs restore` need metadata or anything else on /dev/bcache3 in order
> >>> to restore /dev/bcache2 and /dev/bcache4?
> >>
> >> Devid 1 (bcache2) seems OK to be missing, as all of its data and metadata
> >> are in RAID1.
> >>
> >> But it's strongly recommended to test without wiping bcache2 first, to make
> >> sure btrfs-restore can salvage enough data, and only then wipe bcache2.
> >>
> >> Thanks,
> >> Qu
> >
> > Is it possible to shrink the size of the btrfs on bcache2 without making
> > everything worse? I need more disk space, but I still need bcache2 itself.
>
> That is kind of possible, but please keep in mind that, even in the best
> case, it still needs to write a (very small) amount of metadata into the
> fs, so I can't guarantee it won't make things worse, or even that it's
> possible without the fs falling back to RO.
>
> You need to dump the device extent tree to determine where the last dev
> extent of each device ends, then shrink each device to that size.
>
> An example:
>
> # btrfs ins dump-tree -t dev /dev/nvme/btrfs
> ...
>
>         item 6 key (1 DEV_EXTENT 2169503744) itemoff 15955 itemsize 48
>                 dev extent chunk_tree 3
>                 chunk_objectid 256 chunk_offset 2169503744 length 1073741824
>                 chunk_tree_uuid 00000000-0000-0000-0000-000000000000
>
> In the key here, 1 is the devid and 2169503744 is where the device extent
> starts. 1073741824 is the length of the device extent.
>
> In the above case, the device with devid 1 can be resized down to
> 2169503744 + 1073741824 = 3243245568 bytes without relocating any
> data/metadata.
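>
> A minimal, untested sketch of that calculation, assuming the dump-tree
> output format shown above (the device path is just the example one):
>
>   btrfs ins dump-tree -t dev /dev/nvme/btrfs | awk '
>       # key line: item N key (<devid> DEV_EXTENT <offset>) ...
>       /DEV_EXTENT/ { split($0, a, /[()]/); split(a[2], k, " ")
>                      devid = k[1]; off = k[3] }
>       # detail line: chunk_objectid ... chunk_offset ... length <len>
>       /chunk_offset/ && /length/ { end = off + $NF
>                                    if (end > need[devid]) need[devid] = end }
>       END { for (d in need) printf "devid %s min size: %d\n", d, need[d] }'
>
> For the dump above, this should print "devid 1 min size: 3243245568", the
> value used in the resize command below.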
>
> # time btrfs fi resize 1:3243245568 /mnt/btrfs/
> Resize '/mnt/btrfs/' of '1:3243245568'
>
> real    0m0.013s
> user    0m0.006s
> sys     0m0.004s
>
> And the dump-tree shows the same last device extent:
> ...
>         item 6 key (1 DEV_EXTENT 2169503744) itemoff 15955 itemsize 48
>                 dev extent chunk_tree 3
>                 chunk_objectid 256 chunk_offset 2169503744 length 1073741824
>                 chunk_tree_uuid 00000000-0000-0000-0000-000000000000
>
> (Maybe it's a good time to implement something like a fast shrink for
> btrfs-progs.)
>
> Of course, after resizing btrfs, you still need to resize bcache, but
> that's not related to btrfs (and I am not familiar with bcache either).
>
> Thanks,
> Qu
>
> >
> > Regards,
> > Chiung-Ming Huang
> >
> >
> >>>
> >>> /dev/bcache2, ID: 1
> >>>    Device size:             9.09TiB
> >>>    Device slack:              0.00B
> >>>    Data,RAID1:              3.93TiB
> >>>    Metadata,RAID1:          2.00GiB
> >>>    System,RAID1:           32.00MiB
> >>>    Unallocated:             5.16TiB
> >>>
> >>> /dev/bcache3, ID: 3
> >>>    Device size:             2.73TiB
> >>>    Device slack:              0.00B
> >>>    Data,single:           378.00GiB
> >>>    Data,RAID1:            355.00GiB
> >>>    Metadata,single:         2.00GiB
> >>>    Metadata,RAID1:         11.00GiB
> >>>    Unallocated:             2.00TiB
> >>>
> >>> /dev/bcache4, ID: 5
> >>>    Device size:             9.09TiB
> >>>    Device slack:              0.00B
> >>>    Data,single:             2.93TiB
> >>>    Data,RAID1:              4.15TiB
> >>>    Metadata,single:         6.00GiB
> >>>    Metadata,RAID1:         11.00GiB
> >>>    System,RAID1:           32.00MiB
> >>>    Unallocated:             2.00TiB
> >>>
> >>> Regards,
> >>> Chiung-Ming Huang
> >>>
> >>
>



