On 1/16/20 8:54 PM, Qu Wenruo wrote:
On 2020/1/17 9:50 AM, Josef Bacik wrote:
On 1/16/20 7:55 PM, Qu Wenruo wrote:
On 2020/1/17 12:14 AM, Josef Bacik wrote:
On 1/16/20 1:04 AM, Qu Wenruo wrote:
[...]
Instead of creating a weird error handling case, why not just set the
per_profile_avail to 0 on error? This will simply disable overcommit
and we'll flush more. This way we avoid making a weird situation
weirder, and we don't have to worry about returning an error from
calc_one_profile_avail(). Simply say "hey, we got ENOMEM, metadata
overcommit is going off" with a btrfs_err_ratelimited() and carry on.
Maybe the next update will succeed and we'll get overcommit turned back
on. Thanks,
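[A minimal sketch of that suggestion, assuming the patchset's
calc_one_profile_avail() shape. calc_avail() is a hypothetical helper
standing in for the actual per-profile computation; btrfs_err_rl() is
btrfs's existing rate-limited error macro.]

static void calc_one_profile_avail(struct btrfs_fs_devices *fs_devices,
				   int index)
{
	u64 avail;

	/* calc_avail() is hypothetical; it stands in for the real math. */
	if (calc_avail(fs_devices, index, &avail) < 0) {
		/* ENOMEM: cache 0 so overcommit is disabled for now. */
		btrfs_err_rl(fs_devices->fs_info,
	"failed to update per-profile available space, disabling overcommit");
		avail = 0;
	}
	/* The next successful update re-enables overcommit automatically. */
	WRITE_ONCE(fs_devices->per_profile_avail[index], avail);
}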
Then the next statfs() caller gets screwed-up values until the next
successful update.
Then do a
#define BTRFS_VIRTUAL_IS_FUCKED (u64)-1
and set it to that, so statfs can notice it and re-calculate on its
own. Thanks,
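[A minimal sketch of that sentinel idea, with an assumed (politer)
macro name. get_profile_avail() and recalc_profile_avail() are
hypothetical helpers; fs_info->chunk_mutex is the existing chunk
mutex, and this fallback path is exactly the "slow" case Qu objects
to below.]

#define BTRFS_PER_PROFILE_AVAIL_INVALID	((u64)-1)

static u64 get_profile_avail(struct btrfs_fs_info *fs_info, int index)
{
	u64 avail;

	avail = READ_ONCE(fs_info->fs_devices->per_profile_avail[index]);
	if (avail == BTRFS_PER_PROFILE_AVAIL_INVALID) {
		/* The last update hit ENOMEM; recalculate on demand. */
		mutex_lock(&fs_info->chunk_mutex);
		avail = recalc_profile_avail(fs_info, index);
		mutex_unlock(&fs_info->chunk_mutex);
	}
	return avail;
}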
Then we either keep the old behavior (inaccurate for
RAID5/6/RAID1C2/C3), or hold the chunk_mutex to do the calculation
(slow). Neither looks good enough to me.
Proper error handling still looks better to me.
Either way, we need to revert the device size when we fail at those 4
update points, with or without the patchset.
Doing a proper revert not only improves the existing error handling, but
also keeps the per-profile available array sane.
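[A hedged sketch of that revert pattern, using the real
btrfs_device_{get,set}_total_bytes() accessors.
btrfs_grow_device_size() and btrfs_update_per_profile_avail() are
illustrative names, not the patchset's actual functions.]

static int btrfs_grow_device_size(struct btrfs_device *device, u64 new_size)
{
	u64 old_size = btrfs_device_get_total_bytes(device);
	int ret;

	btrfs_device_set_total_bytes(device, new_size);
	ret = btrfs_update_per_profile_avail(device->fs_devices);
	if (ret < 0) {
		/*
		 * Revert the size change so statfs and overcommit never
		 * see a half-applied state.
		 */
		btrfs_device_set_total_bytes(device, old_size);
	}
	return ret;
}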
Alright, you've convinced me. I'm still not a big fan of it, but it's not the
worst thing in the world. You can add
Reviewed-by: Josef Bacik <josef@xxxxxxxxxxxxxx>
Thanks,
Josef