Re: btrfs and raid1 (restore)


 



The questions were in my e-mail?!

First of all the answers to your questions:

- All data still accessible
- Output of btrfsck:
cooter ~ # btrfsck /dev/disk/by-label/home1
found 242881646593 bytes used err is 0
total csum bytes: 236710380
total tree bytes: 525582336
total fs tree bytes: 219082752
btree space waste bytes: 99723364
file data blocks allocated: 284825751552
 referenced 242200633344
Btrfs v0.19-16-g075587c-dirty

- Of course I did a balance before the mkfs; the RAID had been running for some weeks.


My questions again in a short version:

- Is there a way to see whether the 2 devices are still running in RAID1 mode? The
  output of 'btrfs filesystem show' doesn't match the old one (as described before),
  so it looks like they are no longer running in RAID1 mode.
- Is there a way to get those 2 devices running in RAID1 mode again without creating
  a new filesystem, adding the second device, copying the data back, etc.?
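For reference, here is a sketch of how both questions could be answered from the
command line. This assumes a btrfs-progs with balance filters ('btrfs filesystem df'
and the '-dconvert'/'-mconvert' options), which my v0.19 tools may well predate:

```shell
# Show the allocation profile per chunk type. A line like "Data, RAID1"
# means data chunks are mirrored; "Data, single" means they are not.
btrfs filesystem df /path

# Convert existing data and metadata chunks back to RAID1 in place,
# without recreating the filesystem or copying data back from backup.
btrfs balance start -dconvert=raid1 -mconvert=raid1 /path
```

The convert filters rewrite each chunk with the requested profile, so this runs
on the mounted filesystem and can take a while on ~230 GB of data.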


Regards,
Felix

On 07. September 2010 - 20:46, Jérôme Poulin wrote:
> Date: Tue, 7 Sep 2010 20:46:19 -0400
> From: Jérôme Poulin <jeromepoulin@xxxxxxxxx>
> To: Felix Blanke <felixblanke@xxxxxxxxx>
> Cc: "linux-btrfs@xxxxxxxxxxxxxxx" <linux-btrfs@xxxxxxxxxxxxxxx>
> Subject: Re: btrfs and raid1 (restore)
> 
> Now that we know the facts, what is the problem? Some simple questions first:
> Is the data still accessible?
> What did the rebalance do?
> Output of btrfsck/dmesg?
> mkfs.vfat does not write much to the disk, so if a rebalance was also
> done before the mkfs, I guess most of the data is still present.
> 
> Sent from my mobile device.
> 
> On 2010-09-07, at 18:28, Felix Blanke <felixblanke@xxxxxxxxx> wrote:
> 
> > Hi,
> >
> > I made a REALLY bad mistake today.
> >
> > I have two HDDs running as RAID1 via btrfs. Today I mistyped a device node
> > and ran "mkfs.vfat" on one of them.
> >
> > Then I simply did a "btrfs filesystem balance /path/". "btrfs filesystem show /path/"
> > now looks like this:
> >
> > Label: 'home1'  uuid: c3c38f32-f176-4479-8c44-e832ea64639f
> >    Total devices 2 FS bytes used 226.12GB
> >    devid    2 size 465.76GB used 114.38GB path /dev/loop3
> >    devid    1 size 465.76GB used 114.39GB path /dev/loop4
> >
> >
> > Before my mistake, all three sizes under "FS bytes used" were the same; now the
> > bottom two are only about half the size. Am I right that those devices aren't
> > running in RAID1 mode anymore? :(
> >
> >
> > Is there a way, without copying all the data back from my backup, to get those 2
> > running in RAID1 mode again?
> > Or do I have to run "mkfs.btrfs -m raid1 -d raid1 /dev/1", "btrfs device add /dev/2", and
> > "btrfs filesystem balance /path/"? :/ That would take a lot of time.
> >
> >
> > Thanks for your help!
> >
> >
> > Regards,
> > Felix
> > --
> > To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
> > the body of a message to majordomo@xxxxxxxxxxxxxxx
> > More majordomo info at  http://vger.kernel.org/majordomo-info.html
---end quoted text---

