Re: Crazy idea of cleanup the inode_record btrfsck things with SQL?

On Thu, Dec 04, 2014 at 02:56:55PM +0800, Qu Wenruo wrote:
> The main memory usage in btrfsck is the extent records, which
> we can't free until we have read them all in and checked them, so even
> with mmap/unmap it would only help with the extent_buffers (which are
> already freed when unused, according to their refs).

I'm thinking aloud here, but is it *really* necessary to read everything
into memory?  A multiple-pass algorithm might be possible, e.g. one pass
to find free space by eliminating any areas occupied by extents, then
further passes to rebuild the metadata in that free space.  Or, one pass
to verify the connectivity of references and collect dangling refs,
then a second pass which fixes only the dangling refs.
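The two-pass idea above can be sketched roughly as follows.  This is a
hypothetical, simplified model, not the actual btrfsck data structures:
pass one streams the metadata once and keeps only a set of extent start
offsets plus the references it saw; a cheap set-membership check then
identifies the dangling refs, which are the only things a fixing pass
would need to hold onto.

```python
# Hypothetical sketch of the two-pass idea; all names are illustrative
# and do not correspond to the real btrfsck code.

def pass1_collect(items):
    """First sequential pass: record extent offsets and the refs we saw."""
    extents = set()
    refs = []
    for kind, payload in items:
        if kind == "extent":
            extents.add(payload)   # payload: extent start offset
        elif kind == "ref":
            refs.append(payload)   # payload: offset the ref points at
    return extents, refs

def find_dangling(extents, refs):
    """Refs pointing at no known extent are dangling; only these need
    to be retained for the second (fixing) pass, bounding memory use."""
    return [r for r in refs if r not in extents]

# Toy metadata stream: two extents, one good ref, one dangling ref.
stream = [("extent", 4096), ("ref", 4096),
          ("extent", 8192), ("ref", 12288)]
extents, refs = pass1_collect(stream)
print(find_dangling(extents, refs))  # only the ref to 12288 is dangling
```

The point is that the per-pass state (a set of offsets, a list of bad
refs) can be much smaller than keeping every full extent record resident.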

Sequential reads are usually significantly faster than swapping--even
when swapping to solid-state media.  It could be that reading 260GB of
metadata sequentially two or three times is still faster than thrashing
through random lookups in 20GB of swap on a 4GB machine.
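A back-of-envelope calculation supports this.  The throughput and IOPS
figures below are assumptions for illustration (roughly a commodity
spinning disk), not measurements:

```python
# Rough estimate: three full sequential reads of 260 GB of metadata
# versus page-sized random lookups across 20 GB of swap.
# ~100 MB/s sequential and ~100 random IOPS are assumed figures.

GB = 1024 ** 3
seq_bps = 100 * 1024 ** 2                 # assumed sequential throughput
seq_seconds = 3 * 260 * GB / seq_bps      # three full passes

page = 4096
random_iops = 100                         # assumed random-read rate (HDD)
swap_pages = 20 * GB // page
swap_seconds = swap_pages / random_iops   # worst case: one seek per page

print(f"sequential: ~{seq_seconds/3600:.1f} h, "
      f"swap thrash: ~{swap_seconds/3600:.1f} h")
```

Under these assumptions the three sequential passes finish in a couple of
hours, while fully random swap traffic would take many times longer; on
SSD swap the gap narrows but sequential streaming still tends to win.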
