[ ... ]

> The issue isn't total size, it's the difference between total
> size and the amount of data you want to store on it, and how
> well you manage chunk usage. If you're balancing regularly to
> compact chunks that are less than 50% full, [ ... ] BTRFS on
> 16GB disk images before with absolutely zero issues, and have
> a handful of fairly active 8GB BTRFS volumes [ ... ]

Unfortunately balance operations are quite expensive, especially
from inside VMs. On the other hand, if the system is not heavily
disk-constrained, relatively frequent balances are indeed a good
idea.

It is a bit like the advice in the other thread on OLTP to run
frequent data defrags, which are also quite expensive. Both
combined are like running the compactor/cleaner on log-structured
(another variant of "COW") filesystems like NILFS2: running it
frequently means tighter space use and better locality, but it is
quite expensive too.

>> [ ... ] My impression is that the Btrfs design trades space
>> for performance and reliability.

> In general, yes, but a more accurate statement would be that
> it offers a trade-off between space and convenience. [ ... ]

It is not quite "convenience", it is overhead: whole-volume
operations like compacting, defragmenting (or fsck'ing) tend to
cost significantly in IOPS and in transfer rate, and on flash
SSDs they also consume lifetime. Therefore I personally prefer to
keep quite a bit of unused space on Btrfs or NILFS2 volumes: at a
minimum around double, 10-20%, rather than the 5-10% that I think
is the minimum advisable with conventional designs.
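
For reference, the kind of usage-filtered balance mentioned above
is done with the balance filters in btrfs-progs; a minimal sketch,
assuming a hypothetical mount point /mnt/vol and an illustrative
50% threshold:

  # Rewrite only data and metadata chunks that are less than 50%
  # used, packing their contents into fewer, fuller chunks; higher
  # thresholds rewrite more chunks and cost correspondingly more
  # IOPS and, on flash SSDs, more write endurance.
  btrfs balance start -dusage=50 -musage=50 /mnt/vol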
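
To judge how much headroom a volume has before and after such a
balance, the allocation summary is enough (again /mnt/vol is just
a placeholder, and this assumes a reasonably recent btrfs-progs):

  # Shows device size, space allocated to chunks, and how much of
  # that allocation is actually used; a large allocated-but-unused
  # gap is what a usage-filtered balance reclaims.
  btrfs filesystem usage /mnt/vol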
