Re: Btrfs Raid5 issue.


 



On Mon, Aug 21, 2017 at 11:19 PM, Robert LeBlanc <robert@xxxxxxxxxxxxx> wrote:
> Chris and Qu thanks for your help. I was able to restore the data off
> the volume. I only could not read one file that I tried to rsync (a
> MySQL bin log), but it wasn't critical as I had an off-site snapshot
> from that morning and ownCloud could resync the files that were
> changed anyway. This turned out much better than the md RAID failure
> that I had a year ago. Much faster recovery thanks to snapshots.
>
> Is there anything you would like from this damaged filesystem to help
> determine what went wrong and to help make btrfs better? If I don't
> hear back from you in a day, I'll destroy it so that I can add the
> disks into the new btrfs volumes to restore redundancy.
>
> Bcache wasn't providing the performance I was hoping for, so I'm
> putting the root and roots for my LXC containers on the SSDs (btrfs
> RAID1) and the bulk stuff on the three spindle drives (btrfs RAID1).
> For some reason, it seemed that the btrfs RAID5 setup required one of
> the drives, but I thought I had data with RAID5 and metadata with 2
> copies. Was I missing something else that prevented mounting with that
> specific drive? I don't want to get into a situation where one drive
> dies and I can't get to any data.

With all three connected, what do you get for 'btrfs fi show' ?
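For reference, something like this with all three drives attached would show whether btrfs sees every member (the device paths are placeholders for your actual drives, and the guard at the top is just so the sketch exits cleanly on a box without btrfs-progs):

```shell
# Skip gracefully on machines without btrfs-progs installed.
command -v btrfs >/dev/null 2>&1 || { echo "btrfs-progs not installed"; exit 0; }

# Lists fsid, device count, and flags any missing member device.
btrfs filesystem show

# Compare superblock generation/fsid across the members; a stale
# generation on one drive would point at the device blocking the
# normal (non-degraded) mount.
for dev in /dev/sdb /dev/sdc /dev/sdd; do
    echo "== $dev =="
    btrfs inspect-internal dump-super "$dev" | grep -E '^(generation|fsid)' || true
done
```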

The first email says the superblocks on all three drives are OK, so
it's confusing that only a degraded mount works. That suggests the
kernel isn't finding something it needs on one of the drives -
usually the first superblock, or the system block group being partly
corrupt, or a read error of some kind - and mounting with -o degraded
lets it skip past that device.
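If you do want to poke at it before wiping, these are the usual
suspects to check (again, /dev/sdX is a placeholder; this is a rough
sketch, not a recovery recipe):

```shell
# Skip gracefully on machines without btrfs-progs installed.
command -v btrfs >/dev/null 2>&1 || { echo "btrfs-progs not installed"; exit 0; }

# btrfs keeps up to three superblock copies, at 64KiB, 64MiB and 256GiB.
# -s selects which copy to print; comparing copy 0 against copy 1 shows
# whether it's only the primary super that's damaged.
btrfs inspect-internal dump-super -s 0 /dev/sdX || true
btrfs inspect-internal dump-super -s 1 /dev/sdX || true

# If only the primary super is bad, this rewrites it from a good backup
# copy (-y skips the interactive confirmation):
btrfs rescue super-recover -y /dev/sdX || true
```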

Anyway at least all of the data is safe now. Pretty much all you can
do to guard against data loss is backups. Any degraded state is
precarious because it requires just one more thing to go wrong and
it's all bad news from there.

Gluster is pretty easy to set up, and you can use either the gluster
native mount on Linux or SMB with everything else. Stick a big drive
in a Raspberry Pi (or two) and even though it's only Fast Ethernet
(haha, 100 Mbps is the slow option now) it will still replicate
automatically as well as fail over. Plus one of those could use XFS
if you wanted to hedge your bets. Or one of the less expensive Intel
NUCs will also work if you want to stick with x86.
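A minimal two-node replicated volume is only a few commands - the
hostnames and brick paths here are made up, and note that newer
gluster releases warn that plain replica 2 is split-brain prone and
suggest adding an arbiter:

```shell
# Skip gracefully on machines without glusterfs installed.
command -v gluster >/dev/null 2>&1 || { echo "glusterfs not installed"; exit 0; }

# From node pi1, with a brick directory prepared on each node
# (one brick could sit on XFS if you want to hedge filesystems):
gluster peer probe pi2
gluster volume create gv0 replica 2 pi1:/bricks/gv0 pi2:/bricks/gv0
gluster volume start gv0

# Clients use the native FUSE mount, which fails over between
# replicas automatically; SMB covers everything else.
mount -t glusterfs pi1:/gv0 /mnt/gv0 || true
```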



-- 
Chris Murphy



