On Jan 30, 2014, at 3:18 PM, Duncan <1i5t5.duncan@xxxxxxx> wrote:

> IOW, I guess I don't agree with that patch as it was apparently
> committed. There needs to be a force option as well.

-o degraded is the force option. I think the problem here is that there's sufficient damage to the one remaining device that it cannot be mounted rw. It's sort of a chicken-and-egg problem: the single device available has a file system that's sufficiently damaged that it needs undamaged metadata to get a rw mount. Since there are too few devices for that, it fails to mount rw.

I'm not seeing it as any different from a single device volume with data/metadata profile single, with sufficient damage to cause it to not mount rw. If -o recovery can't fix it, I think it's done for.

So something Johan sent to me but that didn't make the list (I've asked him to repost) is that his attempts to mount with degraded,recovery,skip_balance result in:

    /dev/mapper/bunkerA /mnt
    mount: wrong fs type, bad option, bad superblock

He gets other errors also, and he has the results of btrfs check that might be more revealing, but to say the least it looks pretty nasty. (I'll hazard a reconstruction of the full mount command below.)

Another question for Johan is: what exact balance command was used to go back to single? Were both -dconvert and -mconvert given? Both are required to go from raid1/raid1 to single/DUP or single/single and to actually get rid of the 2nd device. And looking at the man page, I'm not sure how we do that conversion and specify which of the multiple devices we're dropping:

    [filesystem] balance start [options] <path>

With a missing device, presumably this is obvious, but…

Chris Murphy
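
P.S. For the conversion question, this is roughly the two-step sequence I'd expect, assuming the volume can still be mounted rw at /mnt in the first place (which is exactly what's failing for Johan), so treat it as a sketch rather than a tested recipe:

    # Convert both data and metadata chunks back to the single profile;
    # raid1 chunks can't be retained with only one device present.
    btrfs balance start -dconvert=single -mconvert=single /mnt

    # Then remove the absent device from the volume. "missing" is the
    # literal keyword btrfs accepts for a device that's no longer present.
    btrfs device delete missing /mnt

That at least sidesteps the man page question above: you don't name the device in the balance itself, the device removal is a separate step.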
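
P.P.S. Piecing together Johan's paste above, the failed mount invocation was presumably something like the following. This is my reconstruction from the options and paths he reported, not his verbatim command line:

    mount -o degraded,recovery,skip_balance /dev/mapper/bunkerA /mnt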
