Hugo Mills posted on Sat, 14 Nov 2015 14:31:12 +0000 as excerpted:

>> I have read the Gotcha[1] page:
>>
>> Files with a lot of random writes can become heavily fragmented
>> (10000+ extents) causing thrashing on HDDs and excessive multi-second
>> spikes of CPU load on systems with an SSD or **large amount of RAM**.
>>
>> Why could a large amount of memory worsen the problem?
>
> Because the kernel will hang on to lots of changes in RAM for
> longer.  With less memory, there's more pressure to write out dirty
> pages to disk, so the changes get written out in smaller pieces more
> often.  With more memory, the changes being written out get "lumpier".
>
>> If **too much** memory is a problem, is it possible to limit the
>> memory btrfs uses?
>
> There's some VM knobs you can twiddle, I believe, but I haven't
> really played with them myself -- I'm sure there are more knowledgeable
> people around here who can suggest suitable things to play with.

Yes.  Don't have time to explain now, but I will later, if nobody beats
me to it.

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman
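For anyone who wants to experiment before a fuller explanation arrives:
the "VM knobs" Hugo mentions are the kernel's generic dirty-writeback
sysctls under /proc/sys/vm (documented in Documentation/sysctl/vm.txt),
not anything btrfs-specific -- they cap dirty data for all filesystems.
A minimal sketch; the knob names are the standard ones, but the file
name and the values below are only illustrative assumptions, not tested
recommendations:

    # /etc/sysctl.d/99-writeback.conf -- example values only
    # Start background writeback once 256 MiB of dirty data accumulates
    vm.dirty_background_bytes = 268435456
    # Throttle writers outright once 1 GiB of dirty data accumulates
    vm.dirty_bytes = 1073741824
    # Treat dirty data as "old" (and flush it) after 15 seconds
    vm.dirty_expire_centisecs = 1500
    # Wake the flusher threads every 5 seconds
    vm.dirty_writeback_centisecs = 500

Note that the *_bytes and *_ratio forms of these knobs are mutually
exclusive: setting one zeroes the other.  Using the byte-based forms is
what keeps the dirty limit from growing with RAM size on large-memory
machines.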
