Re: 12 TB btrfs file system on virtual machine broke again

> On 5. Jan 2020, at 16:36, Chris Murphy <lists@xxxxxxxxxxxxxxxxx> wrote:
> 
> On Sun, Jan 5, 2020 at 12:18 PM Christian Wimmer
> <telefonchris@xxxxxxxxxx> wrote:
>> 
>> 
>> 
>>> On 5. Jan 2020, at 15:50, Chris Murphy <lists@xxxxxxxxxxxxxxxxx> wrote:
>>> 
>>> On Sun, Jan 5, 2020 at 7:17 AM Christian Wimmer <telefonchris@xxxxxxxxxx> wrote:
>>>> 
> 
>>>> 2020-01-03T11:30:47.479028-03:00 linux-ze6w kernel: [1297857.324177] sda2: rw=2051, want=532656128, limit=419430400
> 
>> /dev/sda is the hard disk file that holds the Linux installation:
>> 
>> #fdisk -l
>> Disk /dev/sda: 256 GiB, 274877906944 bytes, 536870912 sectors
>> Disk model: Suse 15.1-0 SSD
>> Units: sectors of 1 * 512 = 512 bytes
>> Sector size (logical/physical): 512 bytes / 4096 bytes
>> I/O size (minimum/optimal): 4096 bytes / 4096 bytes
>> Disklabel type: gpt
>> Disk identifier: 186C0CD6-F3B8-471C-B2AF-AE3D325EC215
>> 
>> Device         Start       End   Sectors  Size Type
>> /dev/sda1       2048     18431     16384    8M BIOS boot
>> /dev/sda2      18432 419448831 419430400  200G Linux filesystem
>> /dev/sda3  532674560 536870878   4196319    2G Linux swap
> 
> 
> Why does the kernel want=532656128 but knows the limit=419430400? The
> limit matches the GPT partition map.
> 
> What do you get for
> 
> btrfs insp dump-s /dev/sda2

I only have btrfs-progs version 4.19.1 here:

linux-ze6w:~ # btrfs version
btrfs-progs v4.19.1 
linux-ze6w:~ # btrfs insp dump-s /dev/sda2
superblock: bytenr=65536, device=/dev/sda2
---------------------------------------------------------
csum_type               0 (crc32c)
csum_size               4
csum                    0x6d9388e2 [match]
bytenr                  65536
flags                   0x1
                        ( WRITTEN )
magic                   _BHRfS_M [match]
fsid                    affdbdfa-7b54-4888-b6e9-951da79540a3
metadata_uuid           affdbdfa-7b54-4888-b6e9-951da79540a3
label
generation              799183
root                    724205568
sys_array_size          97
chunk_root_generation   797617
root_level              1
chunk_root              158835163136
chunk_root_level        0
log_root                0
log_root_transid        0
log_root_level          0
total_bytes             272719937536
bytes_used              106188886016
sectorsize              4096
nodesize                16384
leafsize (deprecated)           16384
stripesize              4096
root_dir                6
num_devices             1
compat_flags            0x0
compat_ro_flags         0x0
incompat_flags          0x163
                        ( MIXED_BACKREF |
                          DEFAULT_SUBVOL |
                          BIG_METADATA |
                          EXTENDED_IREF |
                          SKINNY_METADATA )
cache_generation        799183
uuid_tree_generation    557352
dev_item.uuid           8968cd08-0c45-4aff-ab64-65f979b21694
dev_item.fsid           affdbdfa-7b54-4888-b6e9-951da79540a3 [match]
dev_item.type           0
dev_item.total_bytes    272719937536
dev_item.bytes_used     129973092352
dev_item.io_align       4096
dev_item.io_width       4096
dev_item.sector_size    4096
dev_item.devid          1
dev_item.dev_group      0
dev_item.seek_speed     0
dev_item.bandwidth      0
dev_item.generation     0
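
As a sanity check on these numbers (assuming the 512-byte sectors from the fdisk output above):

  272719937536 bytes (total_bytes) / 512 = 532656128 sectors   <- exactly the "want" value in the kernel message
  419430400 sectors (sda2)         * 512 = 214748364800 bytes  = 200 GiB

So if I read this right, the superblock thinks the device is about 254 GiB while the partition is only 200 GiB, which would explain why the kernel wants a sector beyond the partition limit.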



> 
> 
>>> This is a virtual drive inside the
>>> guest VM? And is backed by a file on the Promise storage? What about
>>> /dev/sdb? Same thing? You're only having a problem with /dev/sdb,
>>> which contains a Btrfs file system.
>> 
>> Actually I only have a problem with /dev/sdb, which is a hard disk file on my Promise storage. sda2 complains, but the system boots normally.
> 
> sda2 complains? You mean just the previously mentioned FITRIM I/O
> failures? Or there's more?

Only what I found in the previously mentioned messages. Nothing else.

> 
> 
>> 
>> Regarding logs: which log files should I look at, and how do I display them?
>> I looked at /var/log/messages but did not find any related information.
> 
> Start with
> 
> systemctl status fstrim.timer
> systemctl status fstrim.service

linux-ze6w:~ # systemctl status fstrim.timer
● fstrim.timer - Discard unused blocks once a week
   Loaded: loaded (/usr/lib/systemd/system/fstrim.timer; enabled; vendor preset: enabled)
   Active: active (waiting) since Sun 2020-01-05 15:24:59 -03; 1h 19min ago
  Trigger: Mon 2020-01-06 00:00:00 -03; 7h left
     Docs: man:fstrim

Jan 05 15:24:59 linux-ze6w systemd[1]: Started Discard unused blocks once a week.

linux-ze6w:~ # systemctl status fstrim.service
● fstrim.service - Discard unused blocks on filesystems from /etc/fstab
   Loaded: loaded (/usr/lib/systemd/system/fstrim.service; static; vendor preset: disabled)
   Active: inactive (dead)
     Docs: man:fstrim(8)
linux-ze6w:~ # 

> 
> Find the location of the fstrim.service file and cat it, and post that
> too. I want to know exactly what fstrim options it's using. Older
> versions try to trim all file systems.

linux-ze6w:~ # cat /usr/lib/systemd/system/fstrim.service
[Unit]
Description=Discard unused blocks on filesystems from /etc/fstab
Documentation=man:fstrim(8)

[Service]
Type=oneshot
ExecStart=/usr/sbin/fstrim -Av
linux-ze6w:~ # 
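
If I read the fstrim(8) man page correctly, -A means "trim every filesystem listed in /etc/fstab" and -v is just verbose, so this unit does try to trim all of them, not only the btrfs one. For a manual test the same call can be run by hand, or limited to a single mount point, for example:

  fstrim -Av    # what the service runs: all fstab filesystems, verbose
  fstrim -v /   # limiting the trim to one mount point ("/" here is just a placeholder)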



> 
> journalctl --since=-8d | grep fstrim


journalctl --since=-8d

This command shows only the messages from today, and there is no fstrim entry in them.
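
Maybe the journal here is only kept in memory (/run) and not persistent, so entries from before today's boot are already gone. If that guess is right, something like the following should show what is still retained and, if wanted, enable a persistent journal:

  journalctl --list-boots     # shows which boots still have journal data
  mkdir -p /var/log/journal   # with the default Storage=auto, this directory enables persistent storage
  systemctl restart systemd-journald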

Chris




