Austin S. Hemmelgarn posted on Tue, 12 Sep 2017 13:27:00 -0400 as excerpted:

> The tricky part though is that differing workloads are impacted
> differently by fragmentation. Using just four generic examples:
>
> * Mostly sequential write focused workloads (like security recording
>   systems) tend to be impacted by free space fragmentation more than
>   data fragmentation. Balancing filesystems used for such workloads is
>   likely to give a noticeable improvement, but defragmenting probably
>   won't give much.
> * Mostly sequential read focused workloads (like a streaming media
>   server) tend to be the most impacted by data fragmentation, but
>   aren't generally impacted by free space fragmentation. As a result,
>   defrag will help here a lot, but balance won't as much.
> * Mostly random write focused workloads (like most database systems or
>   virtual machines) are often impacted by both free space and data
>   fragmentation, and are a pathological case for CoW filesystems.
>   Balance and defrag will help here, but they won't help for long.
> * Mostly random read focused workloads (like most non-multimedia
>   desktop usage) are not impacted much by either aspect, but if you're
>   on a traditional hard drive they can be impacted significantly by how
>   the data is spread across the disk. Balance can help here, but only
>   because it improves data locality, not because it compacts free
>   space.

This is a very useful analysis, particularly given the examples. Maybe
put it on the wiki under the defrag discussion? (Assuming something like
it isn't already there. I've not looked in a while.) For reference,
example invocations for both operations are sketched at the end of this
message.

-- 
Duncan - List replies preferred. No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master." Richard Stallman
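
As a concrete point of reference for the two operations discussed above,
typical invocations look something like the following. The mount point,
usage filter, and extent-size target here are illustrative assumptions,
not values suggested anywhere in the thread:

  # Compact free space: rewrite only data chunks less than 50% full
  # (50 is an assumed example filter; tune it to the filesystem at hand)
  btrfs balance start -dusage=50 /mnt/data

  # Recursively defragment file data, targeting 32M extents (assumed
  # value). Caution: defragmenting snapshotted or reflinked files
  # unshares their extents and can increase space usage.
  btrfs filesystem defragment -r -t 32M /mnt/data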
