On 2020/1/8 11:04 PM, David Sterba wrote:
> On Tue, Jan 07, 2020 at 10:13:43AM +0800, Qu Wenruo wrote:
>>>> +	devices_info = kcalloc(fs_devices->rw_devices, sizeof(*devices_info),
>>>> +			       GFP_NOFS);
>>>
>>> Calling kcalloc is another potential slowdown, for the statfs
>>> considerations.
>>
>> That's also what we did in statfs() before, so it shouldn't cause any
>> extra problems.
>> Furthermore, we don't call calc_one_profile_avail() directly in
>> statfs().
>>
>> Although I get your point, personally speaking the memory allocation
>> and extra in-memory device iteration should be pretty fast compared
>> to __btrfs_alloc_chunk().
>>
>> Thus I don't think this memory allocation would cause extra trouble,
>> except for the error handling mentioned below.
>
> Right, current statfs also does an allocation via
> btrfs_calc_avail_data_space, so it's the same as before.
>
>> [...]
>>>> +	ret = calc_per_profile_avail(fs_info);
>>>
>>> Adding new failure modes
>>
>> Another solution I have tried is to make calc_per_profile_avail()
>> return void, ignore the ENOMEM error, and just set the affected
>> profile to 0 available space.
>>
>> But that method just delays the ENOMEM, and would cause strange
>> per-profile values until the next successful update or mount cycle.
>>
>> Any idea on which method is less bad?
>
> Better to return the error than wrong values in this case. As the
> numbers are sort of a cache and a mount cycle to get them fixed is not
> very user friendly, we need some other way. As this is global state, a
> bit in fs_info::flags can be set and a full recalculation attempted at
> some point until it succeeds. This would leave the counters stale for
> some time, but I think that is still better than them suddenly being 0.

If can_over_commit() is the only user of this facility, then either an
extra indicator or a sudden 0 is not a problem. In that case, we simply
don't over-commit, and do extra flushing to free metadata space.
But when statfs() starts using this facility, neither a sudden 0 nor a
stale indicator is good. Just imagine: seconds ago we still had several
TiB of free space, and all of a sudden only several GiB are free (from
already-allocated data chunks). Users will definitely complain.

Thus I still prefer proper error handling, since when we're low on
memory, a lot of things can go wrong anyway.

Thanks,
Qu
