Re: Big disk space usage difference, even after defrag, on identical data

Gian-Carlo Pascutto posted on Mon, 13 Apr 2015 16:06:39 +0200 as
excerpted:

>> Defrag should force the rewrite of entire files and take care of this,
>> but obviously it's not returning to "clean" state.  I forgot what the
>> default minimum file size is if -t isn't set, maybe 128 MiB?  But a -t1
>> will force it to defrag even small files, and I recall at least one
>> thread here where the poster said it made all the difference for him,
>> so try that.  And the -f should force a filesystem sync afterward, so
>> you know the numbers from any report you run afterward match the final
>> state.
> 
> Reading the corresponding manual, the -t explanation says that "any
> extent bigger than this size will be considered already defragged". So I
> guess setting -t1 might've fixed the problem too...but after checking
> the source, I'm not so sure.

Oops!  You are correct.  There was an earlier on-list discussion of this 
that I had forgotten.  The "make sure everything gets defragged" magic 
setting is -t 1G or higher, *not* the -t 1 I was telling you previously.  
Since -t sets the size above which an extent is considered already 
defragged, -t 1 ends up skipping everything instead of defragging 
everything.
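
For the record, the invocation I should have suggested looks something 
like this (the path here is just a placeholder for your own mountpoint 
or subvolume):

  # Recursively defragment, treating any extent smaller than 1 GiB as a
  # candidate, then flush afterward so usage reports reflect final state.
  btrfs filesystem defragment -r -t 1G -f /path/to/subvolume

The -f flush matters for the same reason as before: without it, the 
numbers from a report run immediately afterward may not match the final 
on-disk state.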

Thanks for spotting the inconsistency and calling me on it! =:^)

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman
