Re: replacing failed disks in RAID-1 (kernel BUG)?

David Woodhouse wrote:
> On Mon, 2009-07-13 at 12:28 +0200, Tomasz Chmielewski wrote:
>> How do I replace failed disks in RAID-1 mode?
>
> I don't think you can. In theory you can remove the broken one, and you
> can add a _new_ empty one -- I say 'in theory' because you seem to have
> demonstrated both of those actions failing.
>
> But I don't believe we have yet implemented anything to let you
> _replace_ a failed disk and recreate its original contents.
>
> I had that on my TODO list for some time after I get the basic RAID[56]
> operation working.

It would also be interesting to have a tool to monitor the state of the RAID (i.e. something similar to what /proc/mdstat provides for md).
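
Something like the following might serve as a rough stand-in for now (just a sketch -- it assumes a btrfs-progs build that has the unified 'btrfs' tool, and it only lists the filesystems, their member devices and space usage, not degraded or rebuild state the way /proc/mdstat does):

# btrfs filesystem show
# watch -n 5 btrfs filesystem show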


I also tried to compare what happens when we write to md RAID and to btrfs RAID (RAID-1 in both cases), and it looks... strange for btrfs. Or perhaps this is just how RAID-1 works in btrfs?

I used iostat to monitor the writes on both devices.


With md RAID-1, when we do:

# dd if=/dev/zero of=/mnt/md-raid-1/testfile

and

# iostat -dk 1


We can see that the write speed on both devices is more or less the same.
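
For a more directly comparable test, something like this could be used (just a sketch -- sdb and sdc are only placeholders for the two member devices, and oflag=direct is there so the writes hit the disks immediately instead of sitting in the page cache):

# dd if=/dev/zero of=/mnt/md-raid-1/testfile bs=1M count=4096 oflag=direct
# iostat -dk sdb sdc 1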


With btrfs RAID-1, when we do the same, I can see that writes go to one drive while the second drive receives 0 kB/s; then it switches around (the other drive is written to while the first sits idle). Only occasionally do writes go to both drives concurrently, like with md RAID-1.

Is this intended?
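
One thing worth double-checking is whether the data chunks really use the raid1 profile; a quick way to look (again just a sketch, assuming the filesystem is mounted at /mnt/btrfs-raid-1 and a btrfs-progs build with the unified 'btrfs' tool):

# btrfs filesystem df /mnt/btrfs-raid-1

If the Data line there reports something other than RAID1 (e.g. 'single'), the alternating per-device writes could just reflect data chunks being allocated on one device at a time rather than mirrored writes.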


--
Tomasz Chmielewski
http://wpkg.org
