> I'd use a blktrace based tool like iowatcher or seekwatcher to see
> what's really happening on the performance drops.

So I used this command to see if there are any outstanding requests in the
I/O scheduler queue when the performance drops to 0 IOPS:

root@lab1:/# iostat -c -d -x -t -m /dev/sdi 1 10000

The output is:

Device:  rrqm/s  wrqm/s   r/s   w/s  rMB/s  wMB/s  avgrq-sz  avgqu-sz  await  r_await  w_await  svctm  %util
sdi        0.00    0.00  0.00  0.00   0.00   0.00      0.00      0.00   0.00     0.00     0.00   0.00   0.00

"avgqu-sz" gives the queue length (1 second average). So it really seems
that the system is not stuck in the block I/O layer but in an upper layer
instead (most likely the filesystem layer).

I also created an ext4 filesystem on another pair of disks, so I was able
to run simultaneous benchmarks - one for ext4 and one for btrfs (each
having 4 SSDs assigned) - and when btrfs dropped to 0 IOPS, the ext4 fio
benchmark kept generating high IOPS.

I also tried to mount the filesystem with nodatacow:

/dev/sdi on /mnt/btrfs type btrfs (rw,nodatacow)

It didn't help with the performance drops.
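
For reference, a minimal sketch of the kind of side-by-side run described
above. The /mnt/btrfs mount point is from this mail; the ext4 mount point,
block size, queue depth, I/O pattern and runtime are assumptions, since the
original fio job is not shown:

# btrfs target (4 SSDs) - job parameters are hypothetical
fio --name=btrfs-test --filename=/mnt/btrfs/fio.dat --size=10G \
    --rw=randwrite --bs=4k --ioengine=libaio --iodepth=32 --direct=1 \
    --time_based --runtime=600 &

# ext4 target (the other 4 SSDs), started in parallel for comparison;
# /mnt/ext4 is an assumed mount point
fio --name=ext4-test --filename=/mnt/ext4/fio.dat --size=10G \
    --rw=randwrite --bs=4k --ioengine=libaio --iodepth=32 --direct=1 \
    --time_based --runtime=600 &

wait

With a run like this, a stall that shows up only on the btrfs job while the
ext4 job keeps going points at the filesystem rather than the shared block
layer or the drives themselves.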
