Re: Feature requests: online backup - defrag - change RAID level

On 2019-09-12 18:57, General Zed wrote:

Quoting Chris Murphy <lists@xxxxxxxxxxxxxxxxx>:

On Thu, Sep 12, 2019 at 3:34 PM General Zed <general-zed@xxxxxxxxx> wrote:


Quoting Chris Murphy <lists@xxxxxxxxxxxxxxxxx>:

> On Thu, Sep 12, 2019 at 1:18 PM <webmaster@xxxxxxxxx> wrote:
>>
>> It is normal and common for defrag operation to use some disk space
>> while it is running. I estimate that a reasonable limit would be to
>> use up to 1% of total partition size. So, if a partition size is 100
GB, the defrag can use 1 GB. Let's call this "defrag operation space".
>
> The simplest case of a file with no shared extents, the minimum free
> space should be set to the potential maximum rewrite of the file, i.e.
> 100% of the file size. Since Btrfs is COW, the entire operation must
> succeed or fail, no possibility of an ambiguous in between state, and
> this does apply to defragment.
>
> So if you're defragging a 10GiB file, you need 10GiB minimum free
> space to COW those extents to a new, mostly contiguous, set of extents.

False.

You can defragment just 1 GB of that file, and then just write out to
disk (in new extents) an entire new version of the b-trees.
Of course, you don't really need to do all that, as usually only a
small part of the b-trees needs to be updated.

The `-l` option allows the user to choose a maximum amount to
defragment. Setting up a default defragment behavior that has a
variable outcome is not idempotent and probably not a good idea.
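
(For reference, that limit already exists in today's tool; something along these lines -- the path is just an example, and I believe the usual size suffixes are accepted, see btrfs-filesystem(8) -- processes at most 1 GiB of the file's data:

  # defragment only up to 1GiB of this file's data
  btrfs filesystem defragment -l 1G /mnt/data/bigfile
)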

We are talking about a future, imagined defrag. It has no -l option, as we haven't discussed one yet.

As for kernel behavior, it presumably could defragment in portions,
but it would have to completely update all affected metadata after
each e.g. 1GiB section, translating into 10 separate rewrites of file
metadata, all affected nodes, all the way up the tree to the super.
There is no such thing as metadata overwrites in Btrfs. You're
familiar with the wandering trees problem?

No, but it doesn't matter.

No, it does matter. Each time you update metadata, you have to update _the entire tree up to the tree root_. Even if you batch your updates, you still have to propagate the update all the way up to the root of the tree.
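
To make that concrete, here is a minimal sketch in plain C (not btrfs code; the node layout, fan-out and names are made up for illustration) of how a copy-on-write tree propagates a single leaf change:

/* A minimal sketch (plain C, not btrfs code) of why a single leaf update
 * in a COW tree rewrites every node on the path up to the root. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define FANOUT 4

struct node {
    struct node *child[FANOUT];   /* all NULL for a leaf */
    int value;                    /* stands in for real item data */
};

/* Copy-on-write update: never modify a node in place.  Returns a new
 * node; unchanged subtrees are shared with the old tree (this sharing
 * of old nodes under ever-new roots is the "wandering" part). */
static struct node *cow_update(const struct node *n, const int *path,
                               int depth, int new_value, int *nodes_written)
{
    struct node *copy = malloc(sizeof(*copy));

    memcpy(copy, n, sizeof(*copy));
    (*nodes_written)++;                   /* this node must be rewritten */

    if (depth == 0) {
        copy->value = new_value;          /* the leaf carries the change */
    } else {
        int slot = path[0];               /* which child the change lives under */

        copy->child[slot] = cow_update(n->child[slot], path + 1,
                                       depth - 1, new_value, nodes_written);
    }
    return copy;
}

static struct node *make_tree(int depth)
{
    struct node *n = calloc(1, sizeof(*n));

    if (depth > 0)
        for (int i = 0; i < FANOUT; i++)
            n->child[i] = make_tree(depth - 1);
    return n;
}

int main(void)
{
    struct node *root = make_tree(2);     /* three levels: root, node, leaf */
    int path[] = { 1, 2 };                /* the slot to follow at each level */
    int written = 0;
    struct node *new_root;

    new_root = cow_update(root, path, 2, 42, &written);

    /* One leaf change forced three new writes: the leaf, its parent and a
     * new root.  Only after all of them exist can the "super" be pointed
     * at new_root -- that flip is the final commit. */
    printf("nodes rewritten for one leaf update: %d\n", written);
    printf("old root %p, new root %p\n", (void *)root, (void *)new_root);
    return 0;
}

Even a one-item change produces a fresh copy of every node on the path to the root, while untouched siblings are only re-referenced; that is why batching helps, but never removes the rewrite of the upper levels and the root.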

At worst, it just has to completely write out "all metadata", all the way up to the super. It needs to be done just once, because what's the point of writing it 10 times over? Then, the super is updated as the final commit.

On my computer the ENTIRE METADATA is 1 GB. That would be very tolerable and doable.

You sound like you're dealing with a desktop use case. It's not unusual for very large arrays (double-digit TB or larger) to have metadata well into the hundreds of GB. Hell, I've got a 200GB volume with bunches of small files that's got almost 5GB of metadata space used.
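
(Anyone curious about their own numbers can check them; assuming a filesystem mounted at /mnt -- just an example path -- the stock tools report the metadata usage:

  btrfs filesystem df /mnt       # per-type summary, including the Metadata line
  btrfs filesystem usage /mnt    # more detailed breakdown
)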

But that is the worst case, because usually not much metadata has to be updated or written out to disk.


So, there is no problem.





