On Mon, Jan 25, 2016 at 5:44 PM, Justin Brown <justin.brown@xxxxxxxxxxxx> wrote:
>> Does anyone suspect a kernel regression here? I wonder if it's worth it
>> to suggest testing the current version of all fairly recent kernels:
>> 4.5-rc1, 4.4, 4.3.4, 4.2.8, 4.1.16?
>
> I don't have any useful information about parity RAID modes or large
> arrays, so this might be totally useless. Nonetheless, just last week
> I added a 2TB drive to an existing Btrfs raid10 array (5x 2TB before
> the addition) and did a balance afterwards. I didn't take any numbers,
> but I was frequently looking at htop and iotop, and I thought the
> numbers were extremely good: 100-120 MB/s sustained for each drive,
> with the "total" reported by iotop exceeding 600 MB/s. That's with the
> integrated SATA controller on an Intel Z97 mini-ITX motherboard
> (i7-4770 CPU). Significantly faster than anticipated. I started it one
> evening, and it was finished when I awoke the next morning.
>
> That was on 4.2.8-300.fc23.x86_64 with btrfs-progs 4.3.1.

That's been my experience also with raid0 and raid10. Because the p+q
computation is more expensive with raid6, it may need testing
specifically on a raid6 array.

If Christian can successfully cancel the balance, umount, reboot into
another kernel version, and retry, it might be useful in tracking down
the problem (or someone else willing to test could do the same). I'd do
it myself, but I don't have enough drive space at the moment to do it
with anything other than VMs and qcow2 files on a single SSD, although
that should at least saturate the SSD, or come close to it. If so, it
would still be faster than what Christian is reporting.
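Something along these lines should do it; /mnt/data and /dev/sdX here
are just placeholders for the actual mount point and device:

    # stop the currently running balance and unmount
    btrfs balance cancel /mnt/data
    umount /mnt/data

    # reboot into the kernel to be tested, remount, and start a fresh balance
    mount /dev/sdX /mnt/data
    btrfs balance start /mnt/data

Watching iotop during the second run the same way should show whether
the kernel version is what makes the difference.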
-- 
Chris Murphy