Re: [PATCH] btrfs: statfs: Don't reset f_bavail if we're over committing metadata space

On 2020/1/17 8:54 AM, Qu Wenruo wrote:
> 
> 
>> On 2020/1/16 10:29 PM, David Sterba wrote:
>> On Wed, Jan 15, 2020 at 11:41:28AM +0800, Qu Wenruo wrote:
>>> [BUG]
>>> When there is a lot of metadata space reserved, e.g. after balancing a
>>> data block group with many extents, vanilla df reports 0 available space.
>>>
>>> [CAUSE]
>>> btrfs_statfs() reports 0 available space when its metadata space is
>>> exhausted.
>>> The calculation compares currently reserved space against on-disk
>>> available space, with a small headroom as a buffer.
>>> When there is not enough headroom, btrfs_statfs() reports 0
>>> available space.
>>>
>>> The problem is, since commit ef1317a1b9a3 ("btrfs: do not allow
>>> reservations if we have pending tickets"), we allow btrfs to over commit
>>> metadata space, as long as we have enough space to allocate new metadata
>>> chunks.
>>>
>>> This makes the old calculation unreliable, reporting a false 0 available space.
>>>
>>> [FIX]
>>> Drop this naive check from btrfs_statfs().
>>> Also remove the comment about "0 available space when metadata is
>>> exhausted".
>>
>> This is intentional and was added to prevent a situation where 'df'
>> reports available space but exhausted metadata doesn't allow creating
>> a new inode.
> 
> But this behavior itself is not accurate.
> 
> We have the global reservation, which is normally much larger than that
> hardcoded 4M.
> 
> So that check will never really be triggered, which invalidates most of
> your argument.
> 
> Thanks,
> Qu
> 
>>
>> If it gets removed you are trading one bug for another. With the changed
>> logic in the referenced commit, the metadata exhaustion is more likely
>> but it's also temporary.

Furthermore, the point of the patch is that the current check doesn't
play well with metadata over-commit.

If this were before v5.4, I wouldn't touch the check, since it would
never be hit anyway.

But now, for v5.4, either:
- We over-commit metadata
  Meaning we have unallocated space, so there is nothing to worry about

- There is no more space to over-commit into
  But in that case, we still have the global rsv to update essential
  trees. Please note that btrfs should never fall into a state where no
  files can be deleted.

Considering all this, we can no longer really hit that case.

That's why I'm proposing to delete it. I see no reason why that magic
number 4M would still work nowadays.

Thanks,
Qu

>>
>> The overcommit and overestimated reservations make it hard if not
>> impossible to do any accurate calculation in statfs/df. From the
>> usability side, there are 2 options:
>>
>> a) return 0 free, while it's still possible to eg. create files
>> b) return >0 free, but no new file can be created
>>
>> The user report I got was for b), so the guesswork fixes that and turns
>> it into a). The idea behind it is that df claims space is really low,
>> but with the overreservation caused by balance it actually isn't.
>>
>> I don't see a good way out of that which could be solved inside statfs;
>> it only interprets the numbers as best it can under the circumstances.
>> We don't have exact reservations, nor a delta of reserved vs. requested
>> (to check how far off the reservation is).
>>
> 


