Re: 12 TB btrfs file system on virtual machine broke again

On Sun, Jan 5, 2020 at 12:18 PM Christian Wimmer
<telefonchris@xxxxxxxxxx> wrote:
>
>
>
> > On 5. Jan 2020, at 15:50, Chris Murphy <lists@xxxxxxxxxxxxxxxxx> wrote:
> >
> > On Sun, Jan 5, 2020 at 7:17 AM Christian Wimmer <telefonchris@xxxxxxxxxx> wrote:
> >>

> >> 2020-01-03T11:30:47.479028-03:00 linux-ze6w kernel: [1297857.324177] sda2: rw=2051, want=532656128, limit=419430400

> /dev/sda is the hard disc file that holds the Linux installation:
>
> #fdisk -l
> Disk /dev/sda: 256 GiB, 274877906944 bytes, 536870912 sectors
> Disk model: Suse 15.1-0 SSD
> Units: sectors of 1 * 512 = 512 bytes
> Sector size (logical/physical): 512 bytes / 4096 bytes
> I/O size (minimum/optimal): 4096 bytes / 4096 bytes
> Disklabel type: gpt
> Disk identifier: 186C0CD6-F3B8-471C-B2AF-AE3D325EC215
>
> Device         Start       End   Sectors  Size Type
> /dev/sda1       2048     18431     16384    8M BIOS boot
> /dev/sda2      18432 419448831 419430400  200G Linux filesystem
> /dev/sda3  532674560 536870878   4196319    2G Linux swap


Why does the kernel want=532656128 when it knows the limit=419430400?
The limit matches the GPT partition map.
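A quick bit of shell arithmetic locates the out-of-range sector (all
numbers are taken from the kernel message and the fdisk output above;
sda2-relative sectors map to absolute disk sectors by adding the
partition's start offset):

```shell
# want/limit are 512-byte sectors relative to the start of sda2
want=532656128
limit=419430400          # sda2's sector count, matches fdisk
sda2_start=18432         # sda2's first sector, from the GPT table
echo $(( want > limit ))        # 1: the write is past the end of sda2
# absolute disk sector the write was aimed at:
echo $(( sda2_start + want ))   # 532674560, the first sector of sda3 (swap)
```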

What do you get for

btrfs insp dump-s /dev/sda2
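If the superblock's total_bytes is larger than the partition, that
mismatch alone would explain out-of-range writes. The partition size in
bytes works out as follows (sector count from the fdisk output above;
comparing the result against the total_bytes line of the dump-s output
is a manual step):

```shell
# /dev/sda2 spans 419430400 logical 512-byte sectors (from fdisk)
part_sectors=419430400
part_bytes=$(( part_sectors * 512 ))
echo "$part_bytes"                      # 214748364800 bytes
echo $(( part_bytes / 1073741824 ))GiB  # 200GiB -- compare with total_bytes
```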


> > This is a virtual drive inside the
> > guest VM? And is backed by a file on the Promise storage? What about
> > /dev/sdb? Same thing? You're only having a problem with /dev/sdb,
> > which contains a Btrfs file system.
>
> Actually I only have a problem with /dev/sdb, which is a hard disc file on my Promise storage. sda2 complains but boots normally.

sda2 complains? You mean just the previously mentioned FITRIM I/O
failures? Or is there more?


>
> Regarding logs: which log files should I look at, and how do I display them?
> I looked at /var/log/messages but did not find any related information.

Start with

systemctl status fstrim.timer
systemctl status fstrim.service

Find the location of the fstrim.service file and cat it, and post that
too. I want to know exactly what fstrim options it's using. Older
versions try to trim all file systems.
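For reference, a typical fstrim.service on recent util-linux looks
roughly like the sketch below (illustrative only; the exact ExecStart
line is precisely what needs checking, since older units ran something
like `fstrim -av`, which trims every mounted file system rather than
only those listed in /etc/fstab):

[Unit]
Description=Discard unused blocks on filesystems from /etc/fstab

[Service]
Type=oneshot
ExecStart=/usr/sbin/fstrim --fstab --verbose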

journalctl --since=-8d | grep fstrim

You don't have to post that output, but you should see whether fstrim
has been called on /dev/sdb any time in the past 8 days. If enabled,
fstrim.timer runs once per week by default.


-- 
Chris Murphy


