Re: BTRFS checksum mismatch - false positives

On Tue, Sep 24, 2019 at 7:42 AM <hoegge@xxxxxxxxx> wrote:
>
> Sorry, forgot root when issuing commands:
>
> ash-4.3# btrfs fi show
> Label: '2016.05.06-09:13:52 v7321'  uuid: 63121c18-2bed-4c81-a514-77be2fba7ab8
> Total devices 1 FS bytes used 4.31TiB
> devid    1 size 9.97TiB used 4.55TiB path /dev/mapper/vg1-volume_1

OK, so you can run:

# pvs

That should show what makes up that logical volume. You can also
double-check with:

# cat /proc/mdstat
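
If you want to see the whole stack at once, something like this should
also work (just a sketch using standard lvm2 reporting fields; adjust
the VG/LV names to whatever pvs actually reports on your box):

# lvs -o lv_name,vg_name,devices    ## which PVs or md devices back vg1/volume_1
# pvs -o pv_name,vg_name,pv_size    ## physical volumes and the VG they belong to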


> Data, single: total=4.38TiB, used=4.30TiB
> System, DUP: total=8.00MiB, used=96.00KiB
> System, single: total=4.00MiB, used=0.00B
> Metadata, DUP: total=89.50GiB, used=6.63GiB
> Metadata, single: total=8.00MiB, used=0.00B
> GlobalReserve, single: total=512.00MiB, used=0.00B

Yeah, there are a couple of issues there that aren't problems per se.
But with the older kernel, it's probably a good idea to reduce the
large number of unused metadata block groups:

# btrfs balance start -mconvert=dup,soft /mountpoint    ## no idea where the mount point is for your btrfs volume

That command will get rid of the empty single-profile system and
metadata block groups. It should complete almost instantly.
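
If you want to verify, something like this should show the single
profile System and Metadata lines gone afterward (assuming the volume
is mounted at /mountpoint):

# btrfs fi df /mountpoint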

# btrfs balance start -musage=25 /mountpoint

That will find metadata block groups with 25% or less usage,
consolidate their extents into new metadata block groups, and then
delete the old ones. 25% is pretty conservative: there's ~89GiB
allocated to metadata, but only ~7GiB is used. So this command will
find the small bits of metadata strewn across those 89GiB,
consolidate them, and free up a good chunk of allocated space.
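
If you want to reclaim more than the conservative 25% pass does, the
usual approach (just a sketch; the thresholds here are illustrative)
is to repeat with higher usage filters, since each pass only touches
metadata block groups at or below that usage:

# btrfs balance start -musage=50 /mountpoint
# btrfs balance start -musage=75 /mountpoint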

It's not really necessary to do this; you've got a ton of free space
left, with only about half the pool allocated:

9.97TiB used 4.55TiB

>
> Synology indicates that BTRFS can do self-healing of data using RAID information? Is that really the case if it is not a "BTRFS raid" but an MD or SHR raid?

Btrfs will only self-heal metadata on this file system, because there
are two copies of metadata (DUP). It can't self-heal data. That'd be
up to whatever lower layer is providing the RAID capability, and
whether it's md or lvm based, it depends on the drive itself reporting
a discrete read or write error for md/lvm to know what to do. The
lower layer has no checksums available to it, so it has no idea
whether the data is corrupt; it only knows that if a drive complains,
it needs to attempt reconstruction. If that reconstruction produces
corrupt data, Btrfs still detects it and will report on it, but it
can't fix it.
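
For reference, a scrub is what exercises this end to end: it reads
everything, repairs metadata from the good DUP copy when a checksum
fails, and only reports bad data extents without being able to repair
them. Roughly (standard btrfs-progs commands, mount point assumed):

# btrfs scrub start -Bd /mountpoint    ## -B stays in the foreground, -d prints per-device stats
# btrfs device stats /mountpoint       ## cumulative error counters, including csum errors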



-- 
Chris Murphy


