On Wednesday, 6 August 2014 at 11:29:19, Hugo Mills wrote:
> On Wed, Aug 06, 2014 at 12:21:59PM +0200, Martin Steigerwald wrote:
> > It basically happened on about the first heavy write I/O occasion after
> > the BTRFS trees filled the complete device:
> >
> > I am now balancing the trees down to lower sizes manually with
> >
> > btrfs balance start -dusage=10 /home
> >
> > btrfs balance start -musage=10 /home
>
> Note that balance has nothing to do with balancing the metadata
> trees. The tree structures are automatically balanced as part of their
> normal operation. A "btrfs balance start" is a much higher-level
> operation. It's called balance because the overall effect is to
> balance the data usage evenly across multiple devices. (Actually, to
> balance the available space evenly).
>
> Also note that the data part isn't tree-structured, so referring to
> "balancing the trees" with a -d flag is doubly misleading. :)
Hmm, it reduces the "used" size shown in

merkaba:~> btrfs fi sh /home
Label: 'home'  uuid: […]
        Total devices 2 FS bytes used 129.12GiB
        devid    1 size 160.00GiB used 142.03GiB path /dev/dm-0
        devid    2 size 160.00GiB used 142.03GiB path /dev/mapper/sata-home

and I thought this is the size used by the trees BTRFS creates.
So you are saying it does not balance shortest versus longest paths, since the
tree algorithm does that automatically, but just the *data* in the tree?
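
(If I understand the tools correctly, the per-type split behind that number
can be seen with

merkaba:~> btrfs fi df /home

where "total" on each line is the space allocated to data/metadata/system
chunks and "used" the bytes actually stored in them, so the 142.03GiB "used"
in fi show above would be allocated chunk space rather than file data.)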
In any case: I should not be required to do this kind of manual maintenance in
order to prevent BTRFS from locking up hard on write accesses.
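
(For completeness, when running it by hand the usual pattern seems to be
stepping the usage filter up gradually, along the lines of the sketch below;
the thresholds are just examples:

for u in 10 25 50; do
    btrfs balance start -dusage=$u -musage=$u /home
done

so the nearly-empty chunks are reclaimed cheaply first before fuller ones get
rewritten.)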
Ciao,
--
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA B82F 991B EAAC A599 84C7
