Re: btrfs-transaction blocked for more than 120 seconds

Kai Krakow posted on Fri, 03 Jan 2014 02:24:01 +0100 as excerpted:

> Duncan <1i5t5.duncan@xxxxxxx> schrieb:
> 
>> But because a full balance rewrites everything anyway, it'll
>> effectively defrag too.
> 
> Is that really true? I thought it just rewrites each distinct extent and
> shuffles chunks around... This would mean it does not merge extents
> together.

While I'm not a coder and they're free to correct me if I'm wrong...

With a full balance, all chunks are rewritten, merging data (or metadata) 
into fewer chunks where possible, eliminating the now-unused chunks and 
returning the space they occupied to the unallocated pool.  (There are 
now options allowing one to balance only data, or only metadata, or for 
that matter only system chunks, and to do other filtering, say to 
rebalance only chunks less than 10% used or only those not yet converted 
to a new raid level, if desired, but we're talking a full balance here.)
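For reference, those filter options look like this with btrfs-progs (the mountpoint and the percentages are just example values):

```shell
# Full balance: rewrite every chunk on the filesystem.
# (Newer btrfs-progs versions want an explicit --full-balance here.)
btrfs balance start /mnt

# Data chunks only, and only those less than 10% used:
btrfs balance start -dusage=10 /mnt

# Same filtering, but for metadata chunks:
btrfs balance start -musage=10 /mnt

# Convert data and metadata to a new raid level:
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt
```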

Given that everything is being rewritten anyway, a process that can take 
hours or even days on multi-terabyte spinning rust filesystems, /not/ 
doing a file defrag as part of the process would be stupid.

So doing a separate defrag and balance isn't necessary.  And while we're 
at it, doing a separate scrub and balance isn't necessary, for the same 
reason.  (If one copy of the data is invalid and another copy exists, 
the good copy is used for the rewrite, re-duplicating it if necessary 
during the balance, and the invalid copy is simply erased.  If there's 
no valid copy at all, the balance will report errors, and I believe the 
chunks containing the bad data are simply not rewritten, tho the valid 
data in them might still be rewritten elsewhere, leaving only the bad 
data behind (I'm not sure which, on that), thus allowing the admin to 
try other tools to clean up or recover from the damage as necessary.)
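For comparison, the standalone operations that a full balance makes redundant would be something like this, assuming /mnt is the btrfs mountpoint:

```shell
# Verify all checksums, repairing from the good copy where one exists:
btrfs scrub start -B /mnt    # -B: run in the foreground
btrfs scrub status /mnt      # without -B, check progress this way

# Recursively defragment files under the mountpoint:
btrfs filesystem defragment -r /mnt
```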

That's one reason the balance operation can take so much longer than a 
straight sequential read/write of the data might suggest: it's doing all 
that extra work behind the scenes as well.

Tho I'm not sure that it defrags across chunk boundaries, particularly 
if a file's fragments are spread across enough chunks that not all of 
them have been processed by the time a rewritten chunk fills up and the 
balance moves on to the next one.  However, given that data chunks are 
1 GiB in size, that should still cut a multi-thousand-extent file down 
to perhaps a few dozen extents, one per rewritten chunk.
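That upper bound is simple arithmetic. A small sketch of the reasoning, assuming a 1 GiB data chunk size and that balance merges a file's data within each chunk but never across chunk boundaries (the uncertain case above):

```python
import math

CHUNK_SIZE = 1 << 30  # btrfs data chunks are typically 1 GiB


def worst_case_extents_after_balance(file_size_bytes: int) -> int:
    """Upper bound on a file's extent count after a full balance,
    if extents merge within each chunk but not across chunks."""
    return max(1, math.ceil(file_size_bytes / CHUNK_SIZE))


# A 20 GiB file with thousands of extents shrinks to at most 20,
# one per rewritten 1 GiB chunk:
print(worst_case_extents_after_balance(20 * (1 << 30)))  # -> 20
```

So even a badly fragmented multi-gigabyte file ends up with an extent count on the order of its size in GiB, which matches the "few dozen extents" estimate above. (You can check actual extent counts with `filefrag -v`, tho note it can over-report on compressed btrfs files.)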

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman

--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html



