Re: counting fragments takes more time than defragmenting

On 24 June 2015 at 12:46, Duncan <1i5t5.duncan@xxxxxxx> wrote:
> Patrik Lundquist posted on Wed, 24 Jun 2015 10:28:09 +0200 as excerpted:
>
> AFAIK, it's set huge to defrag everything,

It's set to 256K by default.


> Assuming "set a huge -t to defrag to the maximum extent possible" is
> correct, that means -t 1G should be exactly as effective as -t 1T...

1G is actually more effective, because 1T overflows the uint32
extent_thresh field: -t 1T, -t 0, and the 256K default currently all
behave the same.

3G is the largest value that works as expected with -t (the man page
notwithstanding) and is still easy to type.
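For anyone following along, the wrap-around arithmetic can be sketched
like this (a rough model, not the actual btrfs code: it assumes plain
modulo-2^32 truncation and that a resulting 0 falls back to the 256K
default, as described above):

```python
# Model of a 32-bit extent_thresh field: the requested -t value is
# truncated modulo 2^32, and a truncated value of 0 is treated as
# "use the 256K default".
U32_MAX_PLUS_1 = 2**32
DEFAULT_THRESH = 256 * 1024  # 256K default mentioned above

def effective_thresh(requested_bytes):
    """Threshold left after truncation to uint32 (0 -> 256K default)."""
    truncated = requested_bytes % U32_MAX_PLUS_1
    return truncated if truncated else DEFAULT_THRESH

for label, size in [("256K", 256 * 2**10), ("1G", 2**30),
                    ("3G", 3 * 2**30), ("1T", 2**40)]:
    print(label, "->", effective_thresh(size))
```

1T is 2^40, which is an exact multiple of 2^32, so it truncates to 0
and you end up back at the 256K default; 3G (3221225472) still fits in
32 bits, which is why it behaves as expected.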


> But btrfs or ext4, 31 extents ideal or a single extent ideal, 150 extents
> still indicates at least some remaining fragmentation.

I gave it another shot but I've now got 154 extents instead. :-)
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html


