Saran Neti posted on Thu, 01 May 2014 00:48:22 -0400 as excerpted:

> I had 3 x 3 TB drives [...] Then one of the drives got busted.
> Mounting the fs in degraded mode and adding a new fresh drive to
> rebuild raid1 generated several "...blocked for more than 120
> seconds." messages.
>
> Described in
> https://www.mail-archive.com/linux-btrfs@xxxxxxxxxxxxxxx/msg30017.html
> are two possible causes, fragmentation due to COW and hardlinks, both
> of which I think are unlikely in this case. I can mount in degraded
> mode and read files, but that's about it. Is there something I'm
> missing? Any debugging tips would be appreciated.

Just a btrfs user and list regular here, not a dev, but...

You're to be commended for all the useful information you posted; it's
way more helpful than most people manage in their first round. =:^)

Unfortunately, it's also enough to see that I can't be of much help
beyond the suggestion below, so most of it is snipped here as
unnecessary for this reply...

I've several times seen the devs request a magic-sysrq-w dump for cases
like this. That should be alt-sysrq-w on x86 hardware, or:

    echo w > /proc/sysrq-trigger

(the latter should work in a VM, too). That dumps the IO-blocked tasks,
letting the devs see where things are getting stuck.

(If magic-sysrq is new to you, there's more about it in
$KERNDIR/Documentation/sysrq.txt. Last I looked, a Google search
returned some pretty good hits discussing it, too.)
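In practice, capturing the dump from a shell looks something like the
sketch below. This is a minimal example, assuming root privileges and a
kernel built with CONFIG_MAGIC_SYSRQ; the output filename is just a
placeholder, name it whatever you like:

    # Make sure the sysrq interface is fully enabled; some distros
    # default to a restricted bitmask rather than 1 (all functions).
    echo 1 > /proc/sys/kernel/sysrq

    # While the "blocked for more than 120 seconds" state is active,
    # trigger the blocked-tasks dump:
    echo w > /proc/sysrq-trigger

    # The dump lands in the kernel log; save a copy to attach/paste.
    dmesg > sysrq-w-dump.txt

Then paste the relevant chunk of that log into your reply, so the devs
can see exactly which kernel threads are blocked, and where.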
-- 
Duncan - List replies preferred.  No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman