Hello,
I have used btrfs and zfs for some years and feel pretty confident about
their administration - and both, with their snapshots and subvolumes,
have saved me quite often.
I had to grow my 4x250GB RAID10 backup array to a 6x500GB RAID10 backup
array - the slower half of four 1TB 2.5" Spinpoint M8s was to be
supplemented with the slowest quarter of two 2TB 2.5" Spinpoint M9Ts.
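For completeness, the grow followed the usual device-add plus
convert-balance pattern - the commands below are only a rough sketch
from memory, with placeholder device names and mount point:

  # add the two new 500GB partitions to the existing filesystem
  btrfs device add /dev/sde1 /dev/sdf1 /mnt/backup

  # spread existing chunks over all six devices, keeping raid10
  btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt/backup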
During balances or copies, the second copy of the stripe set (A + B |
A' + B') is never read from, throwing away roughly 40% of the possible
performance: btrfs NEVER read from A' + B', even though 50% of the
requested data could have been served from there. So two disks were
maxed out while the others were writing at about 40% of their I/O
capacity.
Also, when rsyncing to an SSD RAID0 zpool (just for testing - the SSD
pool is the working pool, the ZFS and btrfs disk pools are for backup),
only 3 of the 6 disks are read from.
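For anyone who wants to reproduce the observation, the per-device
utilization during such a copy is easy to watch with e.g. iostat from
the sysstat package:

  # extended per-device statistics, refreshed every second
  iostat -x 1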
In contrast, a properly set up mdadm RAID10 with the "far" or "offset"
layout plus XFS, as well as ZFS itself, read from all spindles
(devices), and net data is delivered twice as fast.
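By "far or offset" I mean the mdadm RAID10 layouts, roughly like this -
device names are again just placeholders, not my actual setup:

  # 6-device RAID10 with the "far 2" layout, then XFS on top
  mdadm --create /dev/md0 --level=10 --layout=f2 --raid-devices=6 \
        /dev/sd[b-g]1
  mkfs.xfs /dev/md0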
I would love to see btrfs trying harder to deliver data - I don't know
whether this is a feature still missing from btrfs raid10 or a bug in
the 3.16 kernel line I am using (Mint Rebecca on my workstation).
If anybody knows about this, or I am missing something (-m=raid10
-d=raid10 was OK, I hope, when rebalancing?), I'd like to be
enlightened - whenever I googled, it was always stated that btrfs would
read from all spindles, but that is not the case for me...
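To rule out a wrong profile on my side, the current allocation can be
checked like this (mount point is a placeholder); after the balance it
should report both Data and Metadata as RAID10:

  # show how data/metadata chunks are currently allocated
  btrfs filesystem df /mnt/backup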
Sven.