On Wed, Aug 15, 2012 at 11:29:11AM -0600, Mitch Harder wrote:
> On Tue, Aug 14, 2012 at 3:22 PM, Josef Bacik <jbacik@xxxxxxxxxxxx> wrote:
> > Swinging this pendulum back the other way. We've been allocating chunks up
> > to 2% of the disk no matter how much we actually have allocated. So instead
> > fix this calculation to only allocate chunks if we have more than 80% of the
> > space available allocated. Please test this as it will likely cause all
> > sorts of ENOSPC problems to pop up suddenly. Thanks,
> >
> > Signed-off-by: Josef Bacik <jbacik@xxxxxxxxxxxx>
>
> I've been testing this patch with my multiple rsync test (on a 3.5.1
> kernel merged with for-linus).
>
> I tested without compression and with lzo compression, and I haven't
> run into any ENOSPC issues. I still have ENOSPC issues with zlib,
> with or without this patch.
>
> I made a series of runs with and without this patch (on an
> uncompressed, newly formatted partition), and some of the results were
> not what I anticipated.
>
> 1) I found that *MORE* metadata space was being allocated with this
> patch than when using an unpatched baseline kernel. The total
> allocated space was exactly the same in each run (I saw a slight
> variation in the amount of used Metadata).
>
> On the unpatched baseline kernel, at the end of the run, the 'btrfs fi
> df' command would show:
>
> # btrfs fi df /mnt/benchmark/
> Data: total=10.01GB, used=6.99GB
> System: total=4.00MB, used=4.00KB
> Metadata: total=776.00MB, used=481.38MB
>
> With this patch applied, the 'btrfs fi df' command would show:
>
> # btrfs fi df /mnt/benchmark/
> Data: total=10.01GB, used=6.99GB
> System: total=4.00MB, used=4.00KB
> Metadata: total=1.01GB, used=480.94MB
>
> 2) The multiple rsyncs would run significantly faster with the
> patched kernel.
>
> Unpatched baseline kernel: Time to run 7 rsyncs: 348.3 sec (+/- 9.7 sec)
> Patched kernel: Time to run 7 rsyncs: 316.6 sec (+/- 6.5 sec)
>
> Perhaps the extra allocated metadata space made things run better, or
> perhaps something else was going on.

Well that's odd; I wonder if we're doing the limited dance more often.
Once I've finished my fsync work I'll come back to this. I know for
sure in my tests it's allocating chunks way too often, so I imagine
your test is just tickling a different aspect of the chunk allocator.
Thanks,

Josef
