On Mon, Apr 6, 2015 at 1:08 PM, Martin <develop@xxxxxxxxxx> wrote:
> On Monday, 6 April 2015, 09:45:10, Chris Murphy wrote:
>> On Mon, Apr 6, 2015 at 5:40 AM, Martin <develop@xxxxxxxxxx> wrote:
>> > Hello!
>> >
>> > I have to recover a corrupt btrfs. The size is approx. 4.5 TB. The fs
>> > became corrupt through the failure of a hardware RAID.
>>
>> What raid level? What kind of failure? What is the current raid
>> status? What was the mkfs.btrfs command used to create the file system,
>> OR what is the current profile used for data and metadata?
>
> Hello Chris,
>
> it was a hardware RAID (3ware controller), RAID-5. Two of the six disks
> failed. Because only one disk was physically damaged, I could "dd" the
> RAID to a new big disk with the help of 3Ware/Avago.
>
> The stack was: hardware RAID --- Linux LVM --- btrfs. The metadata
> profile was the default for a single drive, so it should be "dup".

Small files are stored inline with metadata, so possibly those have survived
thanks to the duplicated metadata copies. The data profile is single, and
the corruption came from the raid failure (or from the raid's failure to
rebuild properly), so there's no way for Btrfs to recover if the single
copy is bad. I'd look at the raid recovery somehow being flawed.

As far as getting data off this Btrfs volume, it sounds like a job for
btrfs restore.

-- 
Chris Murphy
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
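[A minimal sketch of the `btrfs restore` workflow suggested above. The device
path and destination directory are placeholders, not taken from this thread;
flags are the standard `btrfs restore` options.]

```shell
# Dry run first (-D): list what would be restored without writing anything.
btrfs restore -D -v /dev/mapper/vg0-btrfsvol /mnt/recovery

# Actual restore: -i continues past errors where possible, -m restores
# owner/mode/timestamps, -x restores extended attributes.
btrfs restore -v -i -m -x /dev/mapper/vg0-btrfsvol /mnt/recovery

# If the default tree root is too damaged, list alternate tree roots,
# then retry with -t <bytenr> pointing at an older root.
btrfs restore -l /dev/mapper/vg0-btrfsvol
```

Since restore only reads from the damaged device and writes to a separate
destination, it can't make the corruption worse, which is why it's usually
tried before any repair tools.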
