On 25 June 2015 at 06:01, Duncan <1i5t5.duncan@xxxxxxx> wrote:
>
> Patrik Lundquist posted on Wed, 24 Jun 2015 14:05:57 +0200 as excerpted:
>
> > On 24 June 2015 at 12:46, Duncan <1i5t5.duncan@xxxxxxx> wrote:
>
> If it's uint32 limited, either kill everything above that in both the
> documentation and code, or alias everything above that to 3G (your next
> paragraph) or whatever.

My simple overflow patch from yesterday fixes the problem, so a request of
4G or larger now yields the 32-bit maximum instead of wrapping to 0.

> >> But btrfs or ext4, 31 extents ideal or a single extent ideal, 150
> >> extents still indicates at least some remaining fragmentation.
>
> > I gave it another shot but I've now got 154 extents instead. :-)
>
> Is it possible there's simply no gig-size free-space holes in the
> filesystem allocation, so it simply /can't/ defrag further than that,
> because there's no place to allocate whole-gig data chunks at a time?

I would guess so, without allocating new chunks.

Defrag can probably be smarter and avoid rewriting extents when doing so
would split them (unless the compression flag is set and it must rewrite
everything).
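
To illustrate the overflow and the clamp-style fix, here is a minimal,
self-contained sketch. The names are made up for illustration; this is
not the actual patch, just the general idea of parsing into 64 bits and
clamping before handing the value to a 32-bit field:

#include <stdint.h>
#include <stdio.h>

/* The target extent size is parsed into a 64-bit value, but the field it
 * ends up in is only 32 bits wide, so 4G silently truncates to 0.  A
 * clamp keeps oversized requests at the 32-bit maximum instead. */
static uint32_t clamp_defrag_threshold(uint64_t requested)
{
        if (requested > UINT32_MAX) {
                fprintf(stderr,
                        "WARNING: target extent size %llu too large, using %u\n",
                        (unsigned long long)requested, UINT32_MAX);
                return UINT32_MAX;
        }
        return (uint32_t)requested;
}

int main(void)
{
        /* Without the clamp, (uint32_t)(4ULL << 30) == 0. */
        printf("%u\n", clamp_defrag_threshold(4ULL << 30)); /* 4294967295 */
        return 0;
}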

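To make the "smarter defrag" idea above concrete, a hypothetical decision
helper could look something like this. It is only a sketch of the
heuristic, not actual kernel code, and the names and parameters are
invented:

#include <stdbool.h>
#include <stdint.h>

/* Skip rewriting an extent when the largest contiguous free hole is
 * smaller than the extent, since relocating it would only split it into
 * more pieces.  A forced rewrite (e.g. recompression when the
 * compression flag is set) overrides the check. */
bool should_rewrite_extent(uint64_t extent_len,
                           uint64_t largest_free_hole,
                           bool force_rewrite)
{
        if (force_rewrite)
                return true;
        return largest_free_hole >= extent_len;
}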