On 09/28/2009 05:39 AM, Tobias Oetiker wrote:
Hi Daniel,
Today Daniel J Blueman wrote:
On Mon, Sep 28, 2009 at 9:17 AM, Florian Weimer <fweimer@xxxxxx> wrote:
* Tobias Oetiker:
Running this on a single disk, I get quite acceptable results.
When running on top of an Areca HW RAID6 (LVM-partitioned),
both read and write performance drop by at least two orders
of magnitude.
Does the HW RAID use write caching (preferably battery-backed)?
I believe Areca controllers have an option for writeback or
writethrough caching, so it's worth checking this, and that you're
running the current firmware, in case of errata. Ironically, disabling
writeback will give the OS tighter control of request latency, but
throughput may drop a lot. I still can't help thinking this is
down to the behaviour of the controller, given that the single-disk
case works well.
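The controller's writeback/writethrough setting lives in the Areca
firmware/CLI, but for the single-disk comparison the drive's own
volatile write cache can be queried directly, roughly like this
(device name is only an example):

    # query the SATA disk's write cache setting (1 = enabled)
    hdparm -W /dev/sda

    # disable / re-enable it to compare request latency vs throughput
    hdparm -W 0 /dev/sda
    hdparm -W 1 /dev/sda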
it certainly is down to the behaviour of the controller, or the
results would be the same as with a single SATA disk :-) It would
be interesting to see what results others get with HW RAID
controllers ...
One way would be to configure the array as 6 or 7 individual devices
and allow BTRFS/DM to manage them, then see whether performance under
write load is better, with or without writeback caching...
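Roughly, assuming the controller can expose the disks individually
(device names and the -d/-m profiles below are only placeholders):

    # create a btrfs filesystem spanning six raw disks, striping data
    # and mirroring metadata, so btrfs itself manages the devices
    mkfs.btrfs -d raid0 -m raid1 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg

    # depending on the btrfs-progs version, scan for member devices
    # (btrfs device scan, or btrfsctl -a with older tools), then mount
    # any one member
    btrfs device scan
    mount /dev/sdb /mnt/test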
I can imagine that this would help, but since btrfs aims to be
multipurpose, it does not really help all that much, since it
fundamentally alters the 'conditions': at the moment the RAID
contains different filesystems and is partitioned using LVM ...
cheers
tobi
the results for ext3 fs look like this ...
I would be more suspicious of the barriers/flushes being issued. If
your write cache is non-volatile (battery-backed), we really do not
want to send flushes down to this type of device. Flushing that kind
of cache could certainly be very, very expensive and slow.
Try "mount -o nobarrier" and see if your performance (write cache still
enabled on the controller) is back to expected levels,
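Something along these lines, with a placeholder mount point (on ext3
the corresponding option is barrier=0):

    # remount without write barriers, leaving the controller's
    # writeback cache on (or umount and mount -o nobarrier if the
    # option is not accepted on remount)
    mount -o remount,nobarrier /mnt/test

    # rough check: time a batch of small writes plus a final fsync,
    # which is where barrier-induced cache flushes hurt most
    time dd if=/dev/zero of=/mnt/test/flushtest bs=4k count=1000 conv=fsync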
Ric