Re: Defragmenting to recover wasted space

On 2019-11-07 9:03 a.m., Nate Eldredge wrote:

> 1. What causes this?  I saw some references to "unused extents" but it
> wasn't clear how that happens, or why they wouldn't be freed through
> normal operation.  Are there certain usage patterns that exacerbate it?

VirtualBox image files are subject to many, many small writes (just
booting Windows, for example, can create well over 5,000 file fragments).
When the image file is new, its extents are very large.  In BTRFS,
extents are immutable: when a small write creates a new 4K COW extent,
the old 4K block also remains on disk as part of the old extent.  That
situation persists until all of the data in the old extent has been
rewritten; only once none of its data is referenced anymore is the
extent freed.
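
A rough way to watch this happen (the mount point, file name, and sizes
below are purely illustrative, assuming a BTRFS filesystem mounted at
/mnt/btrfs):

  # Write a file in one pass so it starts out with a few large extents:
  dd if=/dev/zero of=/mnt/btrfs/test.img bs=1M count=128
  sync
  filefrag -v /mnt/btrfs/test.img    # few, large extents

  # Now overwrite a single 4K block in place:
  dd if=/dev/urandom of=/mnt/btrfs/test.img bs=4K count=1 seek=100 conv=notrunc
  sync
  filefrag -v /mnt/btrfs/test.img    # the file now references one extra 4K extent;
                                     # on disk, the superseded 4K still sits inside
                                     # the old immutable extent until every block of
                                     # that extent has been rewritten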

> 5. Is there a better way to detect this kind of wastage, to distinguish
> it from more mundane causes (deleted files still open, etc) and see how
> much space could be recovered? In particular, is there a way to tell
> which files are most affected, so that I can just defragment those?

Generally speaking, files that are subject to many random writes are
few, and you should already be aware of the larger ones where this might
be an issue (virtual machine image files, large databases, etc.).  These
files should be defragmented frequently.  I don't see any reason not to
run defrag over the whole subvolume, but if you want to search for files
with an absurd number of fragments, you can always use the find command
to locate files, run filefrag on them, and then use whatever tools you
like to filter the output for files with thousands of fragments.
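
As a rough sketch (the path and the 1,000-extent threshold are just
examples, and the awk parsing assumes file names without colons):

  find /path/to/subvol -xdev -type f -exec filefrag {} + 2>/dev/null | \
    awk -F: '$2+0 > 1000 { print $2+0, $1 }' | sort -rn | head -20

That prints the twenty most-fragmented files along with their extent
counts; those are the candidates worth defragmenting individually.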



