On Tue, Oct 15, 2019 at 02:08:31PM +0200, David Sterba wrote:
> Hi,
>
> thanks for working on this. The plain -odiscard hasn't been recommended
> to users for a long time, even with the SATA 3.1 drives that allow
> queueing the requests.
>
> The overall approach to async discard sounds good. The hard part is not
> to shoot down the filesystem by trimming live data, and we had a bug
> like that in the past. For correctness reasons I understand the size of
> the patchset.
>
> On Mon, Oct 07, 2019 at 04:17:31PM -0400, Dennis Zhou wrote:
> > I am currently working on tuning the rate at which it discards in the
> > background. I am doing this by evaluating other workloads and drives.
> > The iops and bps rate limits are fairly aggressive right now as my
> > basic survey of a few drives noted that the trim command itself is a
> > significant part of the overhead. So optimizing for larger trims is
> > the right thing to do.
>
> We need a sane default behaviour, without the need for knobs and
> configuration, so it's great you have a wide range of samples to
> tune it.

Yeah, I'm just not quite sure what that is yet. The tricky part is that
we don't really get burned by trims until well after we've issued them,
so the feedback loop isn't as direct as it is with reads and writes.

> As trim is only a hint, a short delay in processing the requests or a
> slight inefficiency should be acceptable, to avoid complications in the
> code or interfering with other IO.
>
> > Persistence isn't supported, so when we mount a filesystem, the block
> > groups are read in as dirty and background trim begins. This makes
> > async discard more useful for longer running mount points.
>
> I think this is acceptable.
>
> Regarding the code, I leave comments on the block group and trim
> structures to Josef and will focus more on the low-level details and
> coding style or changelogs.

Sounds good. I'll hopefully post a v2 shortly so that we can get more of
that out of the way, and then a smaller incremental series later to
figure out the configuration part.

Thanks,
Dennis
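P.S. To make the rate-limiting discussion above a bit more concrete,
here is a rough userspace sketch of the kind of token-bucket limiting
being talked about, with separate buckets for trim commands (iops) and
trimmed bytes (bps). All of the names and constants below are purely
illustrative; none of this is the actual btrfs code from the patchset.

/*
 * Toy token-bucket limiter for background discard. Userspace sketch
 * only; names and numbers are made up for illustration.
 */
#include <stdbool.h>
#include <stdint.h>
#include <time.h>

struct discard_limiter {
	uint64_t bps_limit;	/* bytes we allow trimming per second */
	uint64_t iops_limit;	/* trim commands we allow per second */
	uint64_t byte_tokens;
	uint64_t io_tokens;
	struct timespec last_refill;
};

static uint64_t ns_since(const struct timespec *then)
{
	struct timespec now;

	clock_gettime(CLOCK_MONOTONIC, &now);
	return (now.tv_sec - then->tv_sec) * 1000000000ULL +
	       (now.tv_nsec - then->tv_nsec);
}

static void discard_limiter_init(struct discard_limiter *dl,
				 uint64_t bps, uint64_t iops)
{
	dl->bps_limit = bps;
	dl->iops_limit = iops;
	/* start with a full one-second burst allowance */
	dl->byte_tokens = bps;
	dl->io_tokens = iops;
	clock_gettime(CLOCK_MONOTONIC, &dl->last_refill);
}

static void refill(struct discard_limiter *dl)
{
	uint64_t ns = ns_since(&dl->last_refill);

	dl->byte_tokens += dl->bps_limit * ns / 1000000000ULL;
	dl->io_tokens += dl->iops_limit * ns / 1000000000ULL;
	/* cap the burst at one second's worth of tokens */
	if (dl->byte_tokens > dl->bps_limit)
		dl->byte_tokens = dl->bps_limit;
	if (dl->io_tokens > dl->iops_limit)
		dl->io_tokens = dl->iops_limit;
	clock_gettime(CLOCK_MONOTONIC, &dl->last_refill);
}

/* Returns true if one trim of @len bytes may be issued right now. */
static bool discard_allowed(struct discard_limiter *dl, uint64_t len)
{
	refill(dl);
	if (dl->io_tokens < 1 || dl->byte_tokens < len)
		return false;
	dl->io_tokens -= 1;
	dl->byte_tokens -= len;
	return true;
}

Because each trim command costs one io token regardless of its length,
the iops bucket is what runs out first when trims are small. That's the
same reason the cover letter argues for batching into larger trims: the
per-command overhead dominates, so fewer, bigger trims move more bytes
through the same command budget.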
