Re: Single drive volume + second drive -> RAID1?

James posted on Mon, 23 Jan 2012 13:17:53 -0800 as excerpted:

> On Mon, Jan 23, 2012 at 1:25 AM, Hugo Mills <hugo@xxxxxxxxxxxxx> wrote:
>>   Why not just create the filesystem as RAID-1 in the first place?
>>
>> # mkfs.btrfs -d raid1 -m raid1 /dev/sda1 /dev/sdb1
> 
> As I said, I've only got two working drives large enough at present.
> 
>>   Then you can restore from your backups. You do have backups, right?
>> (Remember, this is a filesystem still marked as "experimental").
> 
> Yes, I know. :) I just have remote backups,

First post to the list, as I'm just planning my switch to btrfs, so
I've just read much of the wiki and thus have it fresh in mind...

Take a look at the UseCases page on the wiki.  There are several items
of interest there that should be very useful to you right now.  Here's
the link, although the notes below actually have a bit more detail than
the wiki (and the wiki obviously has other useful information not
apropos to this situation):

https://btrfs.wiki.kernel.org/articles/u/s/e/UseCases_8bd8.html


First:

Creating a btrfs raid-1 in "degraded mode", that is, with a missing
drive to be added later: apparently it's not possible to do that
directly yet, but there's a workaround for the limitation that sounds
like just what you need ATM.

The idea is to create a "small" "fake" device using loopback to serve
as the "missing" device.  (The example in the wiki uses 4 GB; I guess
I'm getting old, as that doesn't seem all that small to me!!)  Then
create the filesystem specifying raid-1 for both data and metadata
(-m raid1 -d raid1), giving mkfs both the real and loopback devices to
work with, and finally detach the loopback device and remove the file
backing it, so that all that's left is the single "real" device:

# create a sparse 4 GB file to back the temporary loop device
dd if=/dev/zero of=/tmp/empty bs=1 count=0 seek=4G
losetup /dev/loop1 /tmp/empty
# create the raid1 filesystem across the real and loop devices
mkfs.btrfs -m raid1 -d raid1 /dev/sda1 /dev/loop1
# detach the loop device and delete its backing file
losetup -d /dev/loop1
rm /tmp/empty
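
To sanity-check the result before copying anything over, btrfs
filesystem show should report the filesystem with two devices, one of
them missing (exact wording varies by btrfs-progs version):

btrfs filesystem show /dev/sda1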

I immediately thought of the possibility of sticking that temporary
loopback backing file on a USB thumbdrive if necessary.  A sketch of
that variant follows (the /media/usb mount point is just an assumption):
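
# same trick with the backing file on a thumbdrive mounted at
# /media/usb (hypothetical path).  Note a FAT-formatted stick can't
# hold sparse files, so the file would consume the full 4 GB there.
dd if=/dev/zero of=/media/usb/empty bs=1 count=0 seek=4G
losetup /dev/loop1 /media/usb/empty
mkfs.btrfs -m raid1 -d raid1 /dev/sda1 /dev/loop1
losetup -d /dev/loop1
rm /media/usb/empty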

You should then be able to copy everything over to the new btrfs 
operating in degraded raid-1.  After testing to ensure it's there and 
usable (bootable if that's your intention), you can blank the old drive.  
Adding it in to complete the raid-1 is then done this way:

# the filesystem has a device missing, so mounting needs -o degraded
mount -o degraded /dev/sda1 /mnt
btrfs dev add /dev/sdb1 /mnt
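
One caveat, also off the wiki: anything written while the filesystem
was degraded exists only on the first device, so after adding the
second device a rebalance is needed to get both copies of the raid1
data actually written:

# rewrite existing chunks so they're mirrored across both devices
btrfs filesystem balance /mnt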


Second:

This note might explain why you ended up with raid-0 where you thought 
you had raid-1:

(kernel 2.6.37) Simply creating the filesystem with too few devices will 
result in a RAID-0 filesystem. (This is probably a bug).


Third:

To verify that btrfs is using the raid level you expect:

On a 2.6.37 or later kernel, use

btrfs fi df /mountpoint

The required support was broken accidentally in earlier kernels, but has 
now been fixed.
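
For illustration, on a two-device raid1 filesystem the output looks
something like this (the numbers here are invented; the point is the
RAID1 tag on each chunk type):

Data, RAID1: total=8.00GB, used=6.29GB
System, RAID1: total=8.00MB, used=12.00KB
Metadata, RAID1: total=1.00GB, used=612.94MB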


NB:  As I mentioned above, I'm only researching btrfs for my own
systems at this point, so I obviously have no firsthand experience of
how the above suggestions work in practice.  They're simply off the
wiki.


(Now to send this via gmane.org, since I'm following the list as a
newsgroup through them, and answer the email challenge so I can post
further replies and questions of my own...)

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman


