Re: 12 TB btrfs file system on virtual machine broke again

On Sun, Jan 5, 2020 at 12:48 PM Christian Wimmer
<telefonchris@xxxxxxxxxx> wrote:
>
>
> #fdisk -l
> Disk /dev/sda: 256 GiB, 274877906944 bytes, 536870912 sectors
> Disk model: Suse 15.1-0 SSD
> Units: sectors of 1 * 512 = 512 bytes
> Sector size (logical/physical): 512 bytes / 4096 bytes
> I/O size (minimum/optimal): 4096 bytes / 4096 bytes
> Disklabel type: gpt
> Disk identifier: 186C0CD6-F3B8-471C-B2AF-AE3D325EC215
>
> Device         Start       End   Sectors  Size Type
> /dev/sda1       2048     18431     16384    8M BIOS boot
> /dev/sda2      18432 419448831 419430400  200G Linux filesystem
> /dev/sda3  532674560 536870878   4196319    2G Linux swap



> btrfs insp dump-s /dev/sda2
>
>
> Here I have only btrfs-progs version 4.19.1:
>
> linux-ze6w:~ # btrfs version
> btrfs-progs v4.19.1
> linux-ze6w:~ # btrfs insp dump-s /dev/sda2
> superblock: bytenr=65536, device=/dev/sda2
> ---------------------------------------------------------
> csum_type               0 (crc32c)
> csum_size               4
> csum                    0x6d9388e2 [match]
> bytenr                  65536
> flags                   0x1
>                         ( WRITTEN )
> magic                   _BHRfS_M [match]
> fsid                    affdbdfa-7b54-4888-b6e9-951da79540a3
> metadata_uuid           affdbdfa-7b54-4888-b6e9-951da79540a3
> label
> generation              799183
> root                    724205568
> sys_array_size          97
> chunk_root_generation   797617
> root_level              1
> chunk_root              158835163136
> chunk_root_level        0
> log_root                0
> log_root_transid        0
> log_root_level          0
> total_bytes             272719937536
> bytes_used              106188886016
> sectorsize              4096
> nodesize                16384
> leafsize (deprecated)           16384
> stripesize              4096
> root_dir                6
> num_devices             1
> compat_flags            0x0
> compat_ro_flags         0x0
> incompat_flags          0x163
>                         ( MIXED_BACKREF |
>                           DEFAULT_SUBVOL |
>                           BIG_METADATA |
>                           EXTENDED_IREF |
>                           SKINNY_METADATA )
> cache_generation        799183
> uuid_tree_generation    557352
> dev_item.uuid           8968cd08-0c45-4aff-ab64-65f979b21694
> dev_item.fsid           affdbdfa-7b54-4888-b6e9-951da79540a3 [match]
> dev_item.type           0
> dev_item.total_bytes    272719937536
> dev_item.bytes_used     129973092352
> dev_item.io_align       4096
> dev_item.io_width       4096
> dev_item.sector_size    4096
> dev_item.devid          1
> dev_item.dev_group      0
> dev_item.seek_speed     0
> dev_item.bandwidth      0
> dev_item.generation     0

Partition map says
> /dev/sda2      18432 419448831 419430400  200G Linux filesystem

Btrfs super says
> total_bytes             272719937536

272719937536 / 512 = 532656128

That matches the kernel's FITRIM complaint: want=532656128 (sectors).

OK, so the problem is that the Btrfs super's total_bytes doesn't match
the size of the partition: the super claims ~254 GiB but the partition
is only 200 GiB. The usual way this happens is user error: the
partition is resized (shrunk) without shrinking the file system first.
This file system is still at risk of having problems even if you
disable fstrim.timer. You need to shrink the file system to the same
size as the partition (or smaller).
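The size mismatch can be checked with plain shell arithmetic, using the
numbers from the fdisk and dump-super output above (the mountpoint in
the trailing comment is an assumption; substitute wherever /dev/sda2 is
actually mounted):

```shell
# Partition size in bytes: 419430400 sectors * 512 bytes/sector
part_bytes=$((419430400 * 512))
echo "$part_bytes"        # 214748364800 (exactly 200 GiB)

# total_bytes from `btrfs inspect-internal dump-super /dev/sda2`
super_bytes=272719937536

# FITRIM length the kernel derives, in 512-byte sectors
echo $((super_bytes / 512))   # 532656128 -- the "want=" value

# The super claims more bytes than the partition holds, so the
# file system must be shrunk to fit (run against the mountpoint):
#   btrfs filesystem resize "$part_bytes" /mnt
```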



> linux-ze6w:~ # systemctl status fstrim.timer
> ● fstrim.timer - Discard unused blocks once a week
>    Loaded: loaded (/usr/lib/systemd/system/fstrim.timer; enabled; vendor preset: enabled)
>    Active: active (waiting) since Sun 2020-01-05 15:24:59 -03; 1h 19min ago
>   Trigger: Mon 2020-01-06 00:00:00 -03; 7h left
>      Docs: man:fstrim
>
> Jan 05 15:24:59 linux-ze6w systemd[1]: Started Discard unused blocks once a week.
>
> linux-ze6w:~ # systemctl status fstrim.service
> ● fstrim.service - Discard unused blocks on filesystems from /etc/fstab
>    Loaded: loaded (/usr/lib/systemd/system/fstrim.service; static; vendor preset: disabled)
>    Active: inactive (dead)
>      Docs: man:fstrim(8)
> linux-ze6w:~ #

OK so it's not set to run. Why do you have FITRIM being called?

What are the mount options for this file system?
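The authoritative mount options can be read straight from the kernel;
a `discard` option there would mean inline discard at the mount level,
whereas FITRIM is the ioctl issued by fstrim-style tools (the `/` path
in the findmnt line is just an example mountpoint):

```shell
# Every btrfs mount and its options, as the kernel sees them
grep btrfs /proc/self/mounts

# Or the options for one known mountpoint (path is an assumption)
findmnt -no OPTIONS -T /
```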

> this command shows only the messages from today and there is no fstrim inside

Something else is calling FITRIM.

-- 
Chris Murphy



