On Mon, Jan 11, 2016 at 3:10 PM, Hugo Mills <hugo@xxxxxxxxxxxxx> wrote:
> On Mon, Jan 11, 2016 at 02:31:41PM -0700, Chris Murphy wrote:
>> On Mon, Jan 11, 2016 at 2:03 AM, Hugo Mills <hugo@xxxxxxxxxxxxx> wrote:
>> > On Sun, Jan 10, 2016 at 05:13:28PM -0700, Chris Murphy wrote:
>> >> On Sat, Jan 9, 2016 at 2:04 PM, Hugo Mills <hugo@xxxxxxxxxxxxx> wrote:
>> >> > On Sat, Jan 09, 2016 at 09:59:29PM +0100, cheater00 . wrote:
>> >> >> OK. How do we track down that bug and get it fixed?
>> >> >
>> >> > I have no idea. I'm not a btrfs dev, I'm afraid.
>> >> >
>> >> > It's been around for a number of years. None of the devs has, I
>> >> > think, had the time to look at it. When Josef was still (publicly)
>> >> > active, he had it second on his list of bugs to look at for many
>> >> > months -- but it always got trumped by some new bug that could cause
>> >> > data loss.
>> >>
>> >> Interesting. I did not know of this bug. It's pretty rare.
>> >
>> > Not really. It shows up maybe on average once a week on IRC. It
>> > gets reported much less on the mailing list.
>>
>> Is there a pattern? Does it only happen at a 2TiB threshold?
>
> No, and no.
>
> There is, as far as I can tell from some years of seeing reports of
> this bug, no correlation with RAID level, hardware, OS, kernel
> version, FS size, usage of the FS at failure, or allocation level of
> either data or metadata at failure.
>
> I haven't tried correlating with the phase of the moon or the
> losses on Lloyds Register yet.

Huh. So it's goofy cakes. This is specifically where btrfs_free_extent
produces errno -28 (no space left), and then the fs goes read-only?
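For reference, the kernel hands errors back as negated errno values, so
the -28 in that report should be -ENOSPC. A quick standalone sanity
check, assuming a Linux/glibc userspace (this snippet is purely
illustrative, not btrfs code):

#include <errno.h>
#include <stdio.h>
#include <string.h>

/* Illustrative only, not btrfs code: confirm what errno 28 means.
 * The kernel reports errors as negated errno values, so the "-28"
 * in the btrfs_free_extent report corresponds to -ENOSPC. */
int main(void)
{
        printf("ENOSPC = %d (%s)\n", ENOSPC, strerror(ENOSPC));
        /* On Linux this prints: ENOSPC = 28 (No space left on device) */
        return 0;
}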
-- 
Chris Murphy