There are a lot of root owner checks in btrfs_truncate_inode_items(),
like:
if (test_bit(BTRFS_ROOT_SHAREABLE, &root->state) ||
root == fs_info->tree_root)
However, only the following trees can have INODE_ITEMs:
- tree root (For v1 space cache)
- subvolume trees
- tree reloc trees
- data reloc tree
- log trees
Since the subvolume, tree reloc and data reloc trees all have the
SHAREABLE bit set, and the tree root is checked explicitly, the check
above only excludes log trees.
This patch replaces two such checks with a much simpler one:
if (root->root_key.objectid != BTRFS_TREE_LOG_OBJECTID)
This merges the btrfs_drop_extent_cache() and lock_extent_bits() calls
into the same if branch.
Also, since we're here, add a comment explaining why we don't want to
call lock_extent_bits()/btrfs_drop_extent_cache() on log trees.
Finally, replace ALIGN()/ALIGN_DOWN() with round_up()/round_down(), as
I'm always bad at determining the alignment direction.
Signed-off-by: Qu Wenruo <wqu@xxxxxxxx>
---
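[Not part of the patch, just a quick illustration of the rounding
direction mentioned above: a minimal userspace sketch of
round_up()/round_down() (power-of-two step only, matching the behaviour
of the kernel macros), assuming a 4096 byte sectorsize.]

    /*
     * Illustration only: userspace copies of round_up()/round_down()
     * for power-of-two steps, showing which way each one rounds for a
     * size that is not sector aligned.
     */
    #include <stdio.h>

    #define round_down(x, y) ((x) & ~((unsigned long long)(y) - 1))
    #define round_up(x, y)   ((((x) - 1) | ((unsigned long long)(y) - 1)) + 1)

    int main(void)
    {
            unsigned long long new_size = 6144;     /* 1.5 sectors */
            unsigned long long sectorsize = 4096;

            /* Start of the sector containing new_size: prints 4096 */
            printf("round_down(new_size, sectorsize) = %llu\n",
                   round_down(new_size, sectorsize));
            /* new_size rounded up to the next sector boundary: prints 8192 */
            printf("round_up(new_size, sectorsize)   = %llu\n",
                   round_up(new_size, sectorsize));
            return 0;
    }
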
fs/btrfs/inode.c | 27 ++++++++++++++-------------
1 file changed, 14 insertions(+), 13 deletions(-)
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index a6c26c10ffc5..771af55038bf 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -4101,7 +4101,7 @@ int btrfs_truncate_inode_items(struct btrfs_trans_handle *trans,
u64 bytes_deleted = 0;
bool be_nice = false;
bool should_throttle = false;
- const u64 lock_start = ALIGN_DOWN(new_size, fs_info->sectorsize);
+ const u64 lock_start = round_down(new_size, fs_info->sectorsize);
struct extent_state *cached_state = NULL;
BUG_ON(new_size > 0 && min_type != BTRFS_EXTENT_DATA_KEY);
@@ -4121,20 +4121,22 @@ int btrfs_truncate_inode_items(struct btrfs_trans_handle *trans,
return -ENOMEM;
path->reada = READA_BACK;
- if (root->root_key.objectid != BTRFS_TREE_LOG_OBJECTID)
- lock_extent_bits(&BTRFS_I(inode)->io_tree, lock_start, (u64)-1,
- &cached_state);
-
/*
- * We want to drop from the next block forward in case this new size is
- * not block aligned since we will be keeping the last block of the
- * extent just the way it is.
+	 * There will be a lot of exceptions for log trees, as log inodes
+	 * are not real inodes, but just anchors for logged inodes.
*/
- if (test_bit(BTRFS_ROOT_SHAREABLE, &root->state) ||
- root == fs_info->tree_root)
- btrfs_drop_extent_cache(BTRFS_I(inode), ALIGN(new_size,
+ if (root->root_key.objectid != BTRFS_TREE_LOG_OBJECTID) {
+ /*
+ * We want to drop from the next block forward in case this
+ * new size is not block aligned since we will be keeping the
+ * last block of the extent just the way it is.
+ */
+ lock_extent_bits(&BTRFS_I(inode)->io_tree, lock_start, (u64)-1,
+ &cached_state);
+ btrfs_drop_extent_cache(BTRFS_I(inode), round_up(new_size,
fs_info->sectorsize),
(u64)-1, 0);
+ }
/*
* This function is also used to drop the items in the log tree before
@@ -4335,8 +4337,7 @@ int btrfs_truncate_inode_items(struct btrfs_trans_handle *trans,
should_throttle = false;
if (found_extent &&
- (test_bit(BTRFS_ROOT_SHAREABLE, &root->state) ||
- root == fs_info->tree_root)) {
+ root->root_key.objectid != BTRFS_TREE_LOG_OBJECTID) {
struct btrfs_ref ref = { 0 };
bytes_deleted += extent_num_bytes;
--
2.26.2