(Resending to list as plaintext (*correctly* this time))
I see. I'll probably make the backup array a raid10 then.
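If I'm reading the docs right, creating it should just be a matter of passing
the raid10 profile for both data and metadata at mkfs time; something like
this (device names made up, and raid10 wants at least four devices):

    # -d = data profile, -m = metadata profile
    mkfs.btrfs -d raid10 -m raid10 /dev/sdb /dev/sdc /dev/sdd /dev/sde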
If/when I do see a disk failure on the raid5, are there any specific
steps it would be helpful for me to take to capture the state so you
folks can have a useful bug report?
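My guess at the minimum to capture before touching anything would be
something like the following (mountpoint and device names are placeholders);
let me know if the list usually wants more:

    uname -a                        # exact kernel version
    btrfs --version                 # btrfs-progs version
    sudo btrfs fi show              # filesystem and device layout
    sudo btrfs dev stats /mnt/pool  # per-device error counters
    dmesg | grep -i btrfs           # kernel messages around the failure
    sudo smartctl -a /dev/sdX       # SMART data from the suspect disk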
I plan to run the latest stock kernel from the mainline kernel PPA on
Ubuntu, with btrfs-progs built from git.
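(For the progs I'm assuming the usual build from what I believe is the
canonical repo, roughly:

    git clone git://git.kernel.org/pub/scm/linux/kernel/git/kdave/btrfs-progs.git
    cd btrfs-progs
    ./autogen.sh && ./configure && make
    sudo make install

modulo whatever build dependencies Ubuntu needs installed first.)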
- Tyler
On 8/20/2015 8:16 AM, Donald Pearson wrote:
Raid56 works fine until you have a drive with problems, which really means
it doesn't work at all: handling a drive with problems is the only reason
to use parity in the first place.
Maintenance procedures such as scrubs are also an order of magnitude
slower than with the other raid profiles.
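Easy enough to see for yourself if you compare scrub throughput across
profiles; starting one and checking progress is just (mountpoint is
whatever yours is):

    btrfs scrub start /mnt/pool     # runs in the background by default
    btrfs scrub status /mnt/pool    # shows rate and errors so far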
I would use the raid10 profile on at least one of your pools.
On Aug 20, 2015 7:03 AM, "Austin S Hemmelgarn" <ahferroin7@xxxxxxxxx> wrote:
On 2015-08-20 07:52, Austin S Hemmelgarn wrote:
On 2015-08-19 13:24, Tyler Bletsch wrote:
Thanks. I'd consider raid6, but since I'll be backing up to a second
btrfs raid5 array, I think I have sufficient redundancy; it's equivalent
to raid 5+1 on paper. I'm doing that rather than something like raid10 in
a single box because I want the redundancy of a second physical server,
so I can fail over in the event of a system-level component failure.
(And of course, "failover" means "continue being able to watch TV shows
and stuff")
A question about what you said -- when you say people have hit bugs in
the raid56 code, which flavor do these bugs tend to be? Are they "minding
my own business and suddenly it falls over" bugs, or "I tried to do
something weird with btrfs and it screwed up" bugs?
More along the lines of 'I tried to do something that works fine with the
other raid profiles and it kind of messed up the filesystem'. In general,
you should be safe as long as you are using at least Linux 4.0 and the
most recent version of btrfs-progs; it's been a while since I saw any
raid56-related bugs that caused actual data loss. If you are using this
on SSDs, though, I would wait: there are known issues with DISCARD/TRIM
not working correctly on btrfs right now (nothing involving data loss,
just problems with it not properly trimming free space and therefore
causing issues with wear-leveling), and it looks like the fix won't be in
4.2 as of right now.
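A quick sanity check for that baseline, for what it's worth:

    uname -r          # needs to be 4.0 or newer
    btrfs --version   # should report a recent btrfs-progs release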
On second thought, you might want to wait until 4.3; I just saw this thread:
http://thread.gmane.org/gmane.comp.file-systems.btrfs/47321/focus=47325