15.03.2019 23:31, Hans van Kranenburg wrote:
...
>>
>>>> If so, shouldn't it really be balancing (spreading) the data among
>>>> all the drives to use all the IOPS capacity, even when the raid5
>>>> redundancy constraint is currently satisfied?
>>
>> btrfs divides the disks into chunks first, then spreads the data across
>> the chunks. The chunk allocation behavior spreads chunks across all the
>> disks. When you are adding a disk to raid5, you have to redistribute all
>> the old data across all the disks to get balanced IOPS and space usage,
>> hence the full balance requirement.
>>
>> If you don't do a full balance, it will eventually allocate data on
>> all disks, but it will run out of space on sdb, sdc, and sde first,
>> and then be unable to use the remaining 2TB+ on sdd.
>
> Also, if you have a lot of empty space in the current allocations, btrfs
> balance has the tendency to first start packing everything together
> before allocating new (4-disk-wide) block groups.
>
> This is annoying, because it can result in moving the same data multiple
> times during balance (into the empty space of another existing block
> group, and then again when that one has its turn, etc.).
>
> So you want to get rid of empty space in existing block groups as soon
> as possible. btrfs-balance-least-used (also an example from
> python-btrfs) can do this, by processing them in order of most empty
> first.

But if I understand the above correctly, it will still try to move data
into the next most-empty chunks first. Is there any way to force
allocation of new chunks? Or, better, to force the use of chunks with a
given stripe width as the balance target?

This thread actually made me wonder: is there any guarantee (or even a
tentative promise) about RAID stripe width from btrfs at all? Is it
possible that RAID5 degrades to a mirror by itself due to unfortunate
space distribution?
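
For what it's worth, the "most empty first" ordering that
btrfs-balance-least-used uses can be sketched in a few lines of Python.
This is only an illustration with made-up block-group numbers, not the
actual python-btrfs API; a real tool would read (vaddr, length, used)
from the filesystem and then balance one block group at a time:

```python
# Sketch of the "least used first" balance ordering described above.
# Block groups are hypothetical (vaddr, length, used) tuples, not read
# from a real filesystem; python-btrfs would supply the real values.

def least_used_first(block_groups):
    """Return block groups ordered so the emptiest ones come first.

    Relocating the emptiest block group moves the least data and frees
    a whole chunk's worth of raw space as early as possible.
    """
    return sorted(block_groups, key=lambda bg: bg[2] / bg[1])

GIB = 1024 ** 3
MIB = 1024 ** 2

# (vaddr, length, used) in bytes; three hypothetical 1 GiB data block groups.
bgs = [
    (10 * GIB, GIB, 900 * MIB),  # ~88% used
    (11 * GIB, GIB, 100 * MIB),  # ~10% used
    (12 * GIB, GIB, 500 * MIB),  # ~49% used
]

for vaddr, length, used in least_used_first(bgs):
    # A real tool would now run a balance restricted to this block group,
    # e.g. via the vrange filter covering vaddr..vaddr+length.
    print(f"balance block group at vaddr {vaddr}: {100 * used // length}% used")
```

The point of this ordering is exactly what the quoted text says: the
emptiest block groups are drained first, so their space becomes
unallocated quickly and less data gets shuffled into soon-to-be-balanced
chunks.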
