Re: Understanding BTRFS storage

On 2015-08-26 22:58, Duncan wrote:
Austin S Hemmelgarn posted on Wed, 26 Aug 2015 08:03:40 -0400 as
excerpted:

On 2015-08-26 07:50, Roman Mamedov wrote:
On Wed, 26 Aug 2015 10:56:03 +0200 George Duffield
<forumscollective@xxxxxxxxx> wrote:

I'm looking to switch from a 5x3TB mdadm raid5 array to a Btrfs-based
solution that will involve duplicating a data store on a second
machine for backup purposes (the machine is only powered up for
backups).

What do you want to achieve by switching? As Btrfs RAID5/6 is not safe
yet, do you also plan to migrate to RAID10, at a cost in storage
efficiency?
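(For scale: five 3TB disks in raid5 give (5-1) x 3 = 12TB usable, while
any two-copy layout such as raid10 or btrfs raid1 on the same disks
tops out at 5 x 3 / 2 = 7.5TB.)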

Why not use Btrfs in single-device mode on top of your mdadm RAID5/6?
You can even migrate without moving any data if you currently use Ext4,
as it can be converted to Btrfs in-place.
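A minimal sketch of that in-place path (the /dev/md0 device name and
/mnt mountpoint are assumptions, not from the original post):

  # fs must be unmounted and clean; btrfs-convert works on the block device
  umount /mnt
  fsck.ext4 -f /dev/md0
  btrfs-convert /dev/md0
  mount /dev/md0 /mnt

The original ext4 image is kept in the ext2_saved subvolume, so you can
roll back with btrfs-convert -r until you delete that subvolume to
reclaim the space.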

Someone (IIRC it was Austin H) posted what I thought was an extremely
good setup, a few weeks ago: create two (or more) mdraid0s, and put
btrfs raid1 (or raid5/6 once it's a bit more mature; I've been
recommending waiting until 4.4 and seeing what the on-list reports look
like then) on top.  The btrfs raid on top lets you use btrfs' data
integrity features, while the mdraid0s beneath help counteract the fact
that btrfs isn't yet as well optimized for speed as mdraid is.  And the
btrfs raid on top means a device going bad in one mdraid0 doesn't lose
everything, as it normally would, since the other raid0(s), functioning
as the remaining btrfs devices, let you rebuild the missing btrfs
device once you've recreated the failed raid0.

Normally, that sort of raid01 is discouraged in favor of raid10, with
raid1 at the lower level and raid0 on top, for more efficient rebuilds,
but btrfs' data integrity features change that story entirely. =:^)
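A minimal sketch of that layout, assuming four disks named /dev/sd[a-d]
(the names are illustrative):

  # two mdraid0 legs, two disks each
  mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda /dev/sdb
  mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sdc /dev/sdd
  # btrfs raid1 across the two legs, for data and metadata both
  mkfs.btrfs -d raid1 -m raid1 /dev/md0 /dev/md1

If a leg dies, recreate it with mdadm and then let btrfs replace (or
device add followed by device delete missing) rebuild it from the
surviving copy.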

Two additional things:
1. If you use MD RAID1 instead of RAID0, it's just as fast for reads, no slower for writes than on top of single disks, and gets you better data safety guarantees than even raid6 (with two MD RAID1 devices and BTRFS raid1 on top, you can lose all but one disk and still have all your data); see the sketch after point 2.

2. I would be cautious of MD/DM RAID on the most recent kernels. The clustered MD code that went in recently broke a lot of things initially, and I'm not yet convinced they have managed to glue everything back together (I'm still having occasional problems with RAID1 and RAID10 on LVM), so do some testing on a non-production system first.
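The raid1-under-raid1 variant from point 1, with the same illustrative
/dev/sd[a-d] names as above:

  # two mdraid1 pairs instead of raid0 legs
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc /dev/sdd
  mkfs.btrfs -d raid1 -m raid1 /dev/md0 /dev/md1

Every chunk then has a copy on all four disks, which is where the "lose
all but one" guarantee comes from. And per point 2, it's worth failing
a member on a scratch box first and watching /proc/mdstat to confirm
recovery behaves:

  mdadm /dev/md0 --fail /dev/sda --remove /dev/sda
  mdadm /dev/md0 --add /dev/sda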

