On Mon, 2015-08-17 at 16:44 -0300, Eduardo Bach wrote:
> Based on previous testing with a smaller number of disks, I'm
> suspecting that not all 32 disks are being used. With 12 disks I got
> more speed with btrfs than with mdadm+xfs. With btrfs, 12 disks, and
> large files we got the full theoretical speed, 12 x 200MB/s per disk.
> My hope was to get some light from you guys on debugging the problem
> so that btrfs uses all 32 disks (assuming this is the problem).
> Perhaps debugging this problem may be of interest to the devs?

From the sounds of this, you must be hitting some bottleneck in the
btrfs code.

One thing I'm actually curious about: how is the CPU usage during these
tests? Btrfs can do more work on the CPU than mdadm+xfs - in
particular, data checksums are enabled by default. If you have
compression enabled, that would obviously be a major hit as well.

Make sure you don't have compression enabled (it's off by default, or
you can use the mount option "compress=no"). You could try the
"nodatasum" mount option to see if checksums make a difference.

It could be that you're saturating the CPU, and that's why you're not
seeing any additional gains over 3.5GB/s. Taking a look at top output
while the test is running might be informative.

On the other hand, if the CPU isn't saturated and the disk I/O isn't
saturated, then it's probably a scaling issue in btrfs, possibly
something like lock contention.

-- 
Calvin Walton <calvin.walton@xxxxxxxxxx>

--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
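[Editor's note: the suggestions above could be tried along these lines. This is a hedged sketch - the device name and mountpoint are assumptions, not taken from the thread, and nodatasum affects newly written data only:]

```shell
# Remount with data checksums disabled to see whether csum overhead
# is the bottleneck (device/mountpoint are placeholders):
umount /mnt/btrfs
mount -o nodatasum /dev/sdX /mnt/btrfs

# Confirm compression is not enabled - no "compress" flag should show up:
mount -t btrfs

# While the benchmark runs, watch CPU and per-disk throughput:
top            # press '1' to show per-core usage; look for saturated cores
iostat -xm 2   # from the sysstat package; check %util on each of the 32 disks
```

If every disk shows low %util while one or more cores sit at 100%, that points at a CPU or lock-contention bottleneck rather than the disks themselves.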
