On 2015-11-14 09:11, CHENG Yuk-Pong, Daniel wrote:
Hi List,
I have read the Gotchas[1] page:
Files with a lot of random writes can become heavily fragmented
(10000+ extents), causing thrashing on HDDs and excessive multi-second
spikes of CPU load on systems with an SSD or **large amount of RAM**.
Why could a large amount of memory worsen the problem?
If **too much** memory is a problem, is it possible to limit the
memory btrfs uses?
As Duncan already replied, your issue is probably with the kernel's
ancient defaults for write-back buffering. It defaults to waiting for
10% of system RAM to be pages that need to be written to disk before
forcing anything to be flushed. This worked fine when you had systems
where 256M was a lot of RAM, but is absolutely inane once you get above
about 4G (the exact point at which it becomes a problem is highly
dependent on your storage hardware, however). I find that on most
single-disk systems with a fast disk, you start to see slowdowns when
trying to cache more than about 256M for writeback.
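If you want to cap that yourself, the knobs to look at are the
vm.dirty_* sysctls. A minimal sketch, assuming the ~256M figure above
is a reasonable starting point for your hardware (the file name and
values here are just examples, tune them to taste):

    # /etc/sysctl.d/99-writeback.conf  (hypothetical file name)
    # start background writeback once ~256M of pages are dirty
    vm.dirty_background_bytes = 268435456
    # throttle writers themselves once ~1G of pages are dirty
    vm.dirty_bytes = 1073741824

    # apply without a reboot
    sysctl --system

Setting the *_bytes knobs overrides the percentage-based *_ratio
defaults, so you don't need to touch those separately.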
Background info: I am running a heavy-write database server with 96GB
of RAM. In the worst case it causes multiple minutes of high CPU load.
Systemd keeps killing and restarting services, and old jobs don't die
because they are stuck in uninterruptible wait... etc. I tried
nodatacow, but it seems to only affect new files. It is not a
subvolume option either...

This is a known limitation, although NOCOW is still something that
should be used for database files. The trick to get it set on an
existing file is to create a new, empty file, set the attribute on
that, then copy the existing file into the new one, then rename the
new one over the old one.
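Concretely, that dance looks something like this (the path is made up,
and the database needs to be stopped while you copy so nothing writes
to the old file in between):

    touch /var/lib/db/data.new         # new, empty file
    chattr +C /var/lib/db/data.new     # +C (NOCOW) only sticks on empty files
    cat /var/lib/db/data > /var/lib/db/data.new   # copy the contents across
    mv /var/lib/db/data.new /var/lib/db/data      # rename over the old one

Setting +C on the containing directory also works; files created in it
afterwards inherit the attribute.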
