> On 05 Feb 2015, at 13:47, Austin S Hemmelgarn <ahferroin7@xxxxxxxxx> wrote:
>
> I've actually seen similar behavior without the virtualization when doing large filesystem intensive operations with compression enabled.
> I don't know if this is significant, but it seems to be worse with lzo compression than zlib, and also seems to be worse when compression is enabled at the filesystem level instead of through 'chattr +c'.

Zlib is much slower than lzo, so with zlib the CPU becomes the bottleneck and limits the I/O that actually reaches the volume. So our problem might indeed be related to I/O-intensive operations on the volume.

> I'm not certain, but I think it might have something to do with the somewhat brain-dead default parameters in the default I/O scheduler (the so-called 'completely fair queue', which as I've said before was obviously named by a mathematician and not based on it's actual behavior), although it seems to be much worse when using the Deadline and no-op I/O schedulers.

Good idea. I had a look at my configuration of the “stack” for the block devices and their queuing and caching. My setup looks like this (with default settings – I made no adjustments):

* 2 HDDs
* Hewlett-Packard Company Smart Array Gen8 Controllers (rev 01)
  [With 1 GB write cache. Other black magic seems to be included. Combines both HDDs into a RAID 1.]
* Block device driver
* IO scheduler: deadline
* LVM
* QEMU
  [With writeback cache. Should I change it to “none”? The storage controller already has a write cache.]
* virtio-blk
* btrfs

As you can see, only one IO scheduler is involved. The VM by default does not seem to use any IO scheduler; I checked this by executing “cat /sys/block/vd*/queue/scheduler” in the VM, and it reported “none”.

Regards,
Juergen
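
P.S. For completeness, here is roughly what the two ways of enabling compression that Austin mentions look like; the device and path names below are just placeholders, not my actual setup:

  # mount -o compress=lzo /dev/vda1 /mnt/data    (whole filesystem; use compress=zlib for zlib)
  # chattr +c /mnt/data/somedir                  (per file/directory; new files created there inherit the flag)

The mount option affects everything written to the filesystem, while the chattr flag only applies to the files and directories it is set on.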
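
P.P.S. In case it helps to compare: on the host, the active scheduler for the Smart Array logical drive can be checked and switched at runtime (sda is a placeholder for the real device name), and cache=none could be tried for the guest with a drive option roughly like the one below (an example, not my exact command line):

  # cat /sys/block/sda/queue/scheduler
  noop [deadline] cfq
  # echo noop > /sys/block/sda/queue/scheduler

  qemu-system-x86_64 ... -drive file=/dev/vg0/vm-disk,if=virtio,cache=none

With libvirt, the equivalent would be cache='none' in the <driver> element of the disk definition.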
