Hello Donald,

thanks for your reply. I appreciate your help.

> I would use recover to get the data if at all possible, then you can
> experiment with try to fix the degraded condition live. If you have any
> chance of getting data from the pool, you reduce that chance every time
> you make a change.
OK, you assume that btrfs recover is the most likely way of recovering the data. But if mounting degraded, scrubbing, btrfsck, ... turn out to be more successful, then your proposal is the riskier one, isn't it? With a dd image I can always go back to today's state.
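A minimal sketch of taking such a dd image before experimenting (the device name /dev/sdb and the backup path are assumptions for illustration; the runnable part below uses a scratch file in place of a real device so it can be tried safely):

```shell
# For a real disk you would run (as root), per array member:
#   dd if=/dev/sdb of=/backup/sdb.img bs=4M conv=noerror,sync
# conv=noerror,sync keeps dd going past read errors and pads unreadable
# blocks with zeros, so offsets in the image still line up with the disk.
#
# Safe demo on a scratch file standing in for the device:
set -e
tmpdir=$(mktemp -d)
head -c 1048576 /dev/urandom > "$tmpdir/fake-disk"   # 1 MiB stand-in "device"

# Image the "device"; with a real disk this is the state you can roll back to.
dd if="$tmpdir/fake-disk" of="$tmpdir/fake-disk.img" \
   bs=65536 conv=noerror,sync 2>/dev/null

# Verify the image is byte-identical before experimenting on the original.
cmp "$tmpdir/fake-disk" "$tmpdir/fake-disk.img" && echo "image matches"
```

Restoring is the same dd in the other direction, which is what makes the "go back to today's state" plan work.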
> If btrfs did the balance like you said, it wouldn't be raid5. What you
> just described is raid4, where only one drive holds parity data. I can't
> say that I actually know for a fact that btrfs doesn't do this, but I'd
> be shocked and some dev would need to eat their underwear if the balance
> job didn't distribute the parity also.
OK, I was not aware of the difference between raid4 and raid5.

So, I did try a btrfs recover:

warning devid 3 not found already
Check tree block failed, want=8300102483968, have=65536
Check tree block failed, want=8300102483968, have=65536
Check tree block failed, want=8300102483968, have=65536
read block failed check_tree_block
Couldn't setup extent tree

[it is still running]

btrfs-find-root gives me:
http://paste.ubuntu.com/11844005/
http://paste.ubuntu.com/11844009/
(on the two disks)

btrfs-show-super:
http://paste.ubuntu.com/11844016/

Greetings,
Hendrik
