On 23.12.19 г. 0:11 ч., Chris Murphy wrote:
> On Sun, Dec 22, 2019 at 12:15 PM Roman Mamedov <rm@xxxxxxxxxxx> wrote:
>>
>> On Sun, 22 Dec 2019 20:06:57 +0200
>> Nikolay Borisov <nborisov@xxxxxxxx> wrote:
>>
>>> Well, if we rework how fitrim is implemented - e.g. make discards async
>>> and have some sort of locking to exclude queued extents being allocated
>>> we can alleviate the problem somewhat.
>>
>> Please keep fstrim synchronous, in many cases TRIM is expected to be completed
>> as it returns, for the next step of making a snapshot of a thin LV for backup,
>> to shutdown a VM for migration, and so on.
>
> XFS already does async discards. What's the effect of FIFREEZE on
> discards? An LV snapshot freezes the file system on the LV just prior
> to the snapshot.

Actually, XFS issues synchronous discards for the FITRIM ioctl, i.e.
xfs_trim_extents calls blkdev_issue_discard, same as BTRFS does. Dennis'
patches implement async runtime discards (which is what XFS uses by
default).

>
>> I don't think many really care about how long fstrim takes, it's not a typical
>> interactive end-user task.
>
> I only care if I notice it affecting user space (excepting my timed
> use of fstrim for testing).
>
> Speculation: If a scheduled fstrim can block startup, that's not OK. I
> don't have enough data to know if it's possible, let alone likely. But
> when fstrim takes a minute to discard the unused blocks in only 51GiB
> of used block groups (likely highly fragmented free space), and only a
> fraction of a second to discard the unused block *groups*, I'm
> suspicious startup delays may be possible.

If it takes that long then the drive's implementation is at fault.
Whatever we do in software will only mask the latency, which might be a
workable solution for some but not for others.

>
> Found this, from 2019 LSFMM
> https://lwn.net/Articles/787272/
