On 2019-02-15 10:40, Brian B wrote:
It looks like the btrfs code currently uses the total space available on
a disk to determine where it should place the two copies of a file in
RAID1 mode. Wouldn't it make more sense to use the _percentage_ of free
space instead of the number of free bytes?
For example, I have two disks in my array that are 8 TB, plus an
assortment of 3, 4, and 1 TB disks. With the current allocation code,
btrfs will use my two 8 TB drives exclusively until I've written 4 TB of
files, then it will start using the 4 TB disks, then eventually the 3 TB,
and finally the 1 TB disks. If the code used a percentage figure
instead, it would spread the allocations much more evenly across the
drives, ideally spreading load and reducing drive wear.
Is there a reason this is done this way, or is it just something that
hasn't had time for development?
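The fill order described above can be checked with a quick simulation. This is a sketch, not the kernel code, and it assumes one each of the 4, 3, and 1 TB drives alongside the two 8 TB drives (the exact mix was not given); btrfs RAID1 places each chunk's two copies on the two devices with the most free space.

```python
free = [8000, 8000, 4000, 3000, 1000]   # free space per disk, in GB (assumed mix)
first_used_at = [None] * len(free)      # data written when each disk first gets a chunk
written = 0                             # GB of file data written so far

while sorted(free)[-2] >= 1:            # need two devices with room for a copy
    # pick the two devices with the most free space (ties broken by index)
    a, b = sorted(range(len(free)), key=lambda i: free[i], reverse=True)[:2]
    for d in (a, b):
        if first_used_at[d] is None:
            first_used_at[d] = written
        free[d] -= 1                    # one 1 GB chunk copy on each device
    written += 1

print(first_used_at)   # roughly [0, 0, 4000, 5500, 9500]
print(written)         # total data stored: roughly 12000 GB (half of 24 TB raw)
```

The simulation reproduces the behavior Brian describes: the smaller disks sit idle until the larger ones have drained down to their size.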
The current approach (allocating each chunk to the devices with the most
free bytes) is simple to implement, easy to verify, runs fast, produces
optimal or near-optimal space usage in pretty much all cases, and is
highly deterministic.
Using percentages reduces the simplicity, ease of verification, and
speed (division is still slow on most CPUs, and you need division to
compute percentages). It is also likely to be less deterministic, both
because the choice of the first devices is harder when all of them are
100% empty and because of potential rounding errors. And it probably
won't produce optimal layouts quite as reliably: you either need
floating-point math (which is to be avoided in the kernel whenever
possible), or you end up with much more quantized disk selection.
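To illustrate the quantization point with a toy example (the sizes here are assumed numbers, not from the original message): with integer-only percentages, very different fill levels can collapse to the same value and turn the device choice into a tie.

```python
# Integer-only percentage comparison (no floating point in the kernel):
# an 8 TB disk that is 50.4% free and a 1 TB disk that is 50.2% free
# both round down to "50% free", even though one has eight times the
# absolute free space. Sizes in MB are assumed example values.
total = [8_000_000, 1_000_000]
free  = [4_032_000,   502_000]
pct   = [f * 100 // t for f, t in zip(free, total)]
print(pct)   # [50, 50] -- a tie, despite ~4 TB vs ~0.5 TB of free space
```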
I could see an adapted percentage method that preferentially spreads
allocations across disks _possibly_ making sense once we can properly
parallelize disk access in BTRFS, but until then I see no reason to
change something that already works reasonably well.
In your particular case, I'd actually suggest using a layer under BTRFS
(such as LVM or an md linear array) to merge the smaller disks, so that
as many devices as possible are close to 8 TB. That should help spread
the load better.
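One way to do that merging (a sketch with hypothetical device names; adjust /dev/sdX to match your drives) is a linear LVM volume spanning the 4, 3, and 1 TB disks:

```shell
# Hypothetical device names: /dev/sdc (4 TB), /dev/sdd (3 TB), /dev/sde (1 TB).
# Concatenate them into one ~8 TB logical volume, then hand that to btrfs.
pvcreate /dev/sdc /dev/sdd /dev/sde
vgcreate merged /dev/sdc /dev/sdd /dev/sde
lvcreate -l 100%FREE -n big merged       # one linear LV spanning all three
btrfs device add /dev/merged/big /mnt    # add it to the existing filesystem
```

A failure of any one member disk takes out the whole merged device, but btrfs RAID1 is designed to survive the loss of one device, so this keeps the usual one-disk redundancy.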