Re: Data recovery from a linear multi-disk btrfs file system

On Fri, 15 Jul 2016 20:45:32 +0200,
Matt <langelino@xxxxxxx> wrote:

> > On 15 Jul 2016, at 14:10, Austin S. Hemmelgarn
> > <ahferroin7@xxxxxxxxx> wrote:
> > 
> > On 2016-07-15 05:51, Matt wrote:  
> >> Hello
> >> 
> >> I glued together 6 disks in linear LVM fashion (no RAID) to obtain
> >> one large file system (see below).  One of the 6 disks failed.
> >> What is the best way to recover from this?
> > The tool you want is `btrfs restore`.  You'll need somewhere to put
> > the files from this too of course.  That said, given that you had
> > data in raid0 mode, you're not likely to get much other than very
> > small files back out of this, and given other factors, you're not
> > likely to get what you would consider reasonable performance out of
> > this either.  
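
For reference, a restore run along these lines is a reasonable
starting point; /dev/sda1 and /mnt/recovery are placeholders for one
of the surviving member devices and a target with enough free space:

    # dry run: list what would be recovered without writing anything
    btrfs restore -i -v -D /dev/sda1 /mnt/recovery

    # real run: -i ignores read errors, -v prints each file restored
    btrfs restore -i -v /dev/sda1 /mnt/recovery
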
> 
> Thanks so much for pointing me towards btrfs-restore. I will surely
> give it a try.  Note that the FS is not RAID0 but a linear ("JBOD")
> configuration, which is why it somehow did not occur to me to try
> btrfs-restore.  The good news is that in this configuration the
> files are *not* distributed across disks; we can read most of the
> files just fine.  The failed disk was actually smaller than the
> other five, so we should be able to recover more than 5/6 of the
> data, shouldn't we?  My trouble is that the IO errors due to the
> missing disk cripple the transfer speed of both rsync and dd_rescue.
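
If the failed disk is still partially readable, GNU ddrescue (a
separate tool from dd_rescue, built for exactly this case) can image
it while fast-skipping unreadable areas; the map file records progress
so interrupted runs resume instead of starting over. The device and
paths below are placeholders:

    # first pass: copy the easy areas, skip anything that reads slowly
    ddrescue -n /dev/sdf /mnt/scratch/failed.img failed.map

    # second pass: retry the bad areas recorded in the map file
    ddrescue -r3 /dev/sdf /mnt/scratch/failed.img failed.map
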
> 
> > Your best bet to get a working filesystem again would be to just
> > recreate it from scratch, there's not much else that can be done
> > when you've got a raid0 profile and have lost a disk.  
> 
> This is what I plan to do if btrfs-restore turns out to be too slow
> and nobody on this list has a better idea.  It will, however,
> require transferring >15TB across the Atlantic (this is where the
> "backups" reside).  That would be tedious, which is why I would love
> to avoid it.

Depending on the importance of the data, it may be cheaper to
transfer the data physically on hard disks...

However, if your backup potentially includes a lot of duplicate
blocks, you may have a better experience using borgbackup to transfer
the data - it's a free, deduplicating and compressing backup tool. If
your data isn't already compressed and doesn't contain a lot of
images, you may end up with 8TB or less to transfer. I'm using borg to
compress a 300GB server down to a 50-60GB backup (and that already
includes 4 weeks' worth of retention). My home machine compresses down
to 1.2TB from 1.8TB of data with around 1 week of retention - though I
have a lot of non-duplicated binary data (images, videos, games).
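
For the record, a minimal borg workflow looks roughly like this; the
repository path, archive name and retention policy are only examples:

    # create the repository once
    borg init --encryption=repokey /backup/repo

    # take a deduplicated, lz4-compressed backup of /data
    borg create --stats --compression lz4 /backup/repo::first-backup /data

    # retention: keep the last 4 weekly archives
    borg prune --keep-weekly 4 /backup/repo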

When backing up across a long or slow network link, you may want to
keep a local copy of the backup and rely on deduplication to keep the
transfer small. My strategy is to use borgbackup to create backups
locally, then rsync the result to the remote location.
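
In shell terms, that strategy can be as simple as the following; the
host and paths are placeholders:

    # back up locally into the borg repository
    borg create --compression lz4 /backup/repo::$(date +%Y-%m-%d) /data

    # then push only new/changed repository segments over the slow link
    rsync -a --partial /backup/repo/ remote-host:/backup/repo/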

-- 
Regards,
Kai

Replies to list-only preferred.

