On Thu, 2008-09-25 at 21:01 -0400, Ric Wheeler wrote:
> Chris Mason wrote:
> > On Thu, 2008-09-25 at 18:58 -0400, Ric Wheeler wrote:
> >
> >>> I'm at 6.9 million files so far on a 500GB disk, and not surprisingly, I
> >>> get 155 files/sec ;)  My hope is that we're spinning around due to bad
> >>> accounting on the reserved extents, and that Yan's latest patch set will
> >>> fix it.
> >>>
> >>> -chris
> >>>
> >> I can update & restart my test as well. It is an odd box (8 CPUs, only
> >> 1GB of DRAM and a single large 1TB s-ata drive). Hopefully useful in
> >> testing out edge conditions ;-)
> >>
> >
> > I'll push out Yan's patches tomorrow. My box here is at 17.5 million
> > files and still going at 148 files/sec
> >
> > -chris
> >
>
> Sounds like a plan, thanks!

My rate declined a bit to 60 files/sec, but overnight the run made it up to
58 million or so files without stalling.

It is possible that my metadata threshold changes caused problems for you,
which might explain why my 4GB of RAM lasted longer than your 1GB. I'll try
to rework the thresholds.

-chris

--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
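[Editor's note: the files/sec figures in the thread come from large file-creation runs; the exact benchmark used is not shown in this message. A minimal, illustrative sketch of how such a rate can be measured (names and file count are made up for the example, not taken from the thread):

```python
import os
import tempfile
import time


def create_files(directory, count):
    """Create `count` empty files in `directory` and return the
    observed creation rate in files/sec."""
    start = time.monotonic()
    for i in range(count):
        # Zero-padded names keep directory listings ordered.
        with open(os.path.join(directory, f"f{i:08d}"), "w"):
            pass
    elapsed = time.monotonic() - start
    return count / elapsed if elapsed > 0 else float("inf")


if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        rate = create_files(d, 10000)
        print(f"{rate:.0f} files/sec")
```

A real run in the spirit of the thread would point `directory` at a freshly formatted btrfs mount and use a far larger `count`, watching whether the rate degrades as millions of files accumulate.]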
