On Thu, Jun 18, 2020 at 03:49:47PM +0800, Qu Wenruo wrote:
> v3:
> - Added two new patches
> - Refactor check_can_nocow()
>   Since the introduction of nowait, check_can_nocow() is in fact split
>   into two usage patterns: check_can_nocow(nowait = false) followed by
>   btrfs_drew_write_unlock(), and a standalone check_can_nocow(nowait = true).
>   Refactor them into two functions: start_nocow_check() paired with
>   end_nocow_check(), and a standalone try_nocow_check(). With comments added.
>
> - Rebased to latest misc-next
>
> - Added btrfs_assert_drew_write_locked() for btrfs_end_nocow_check()
>   This one is a little concerning, as it's in the hot path of buffered
>   write.
>   It calls percpu_counter_sum() in that hot path, causing an obvious
>   performance drop on a CONFIG_BTRFS_DEBUG build.
>   Not sure the assert is worth it since there aren't any other users.
>
> Qu Wenruo (3):
>   btrfs: add comments for check_can_nocow() and can_nocow_extent()
>   btrfs: refactor check_can_nocow() into two variants
>   btrfs: allow btrfs_truncate_block() to fallback to nocow for data
>     space reservation

As the fix is a stable backport candidate, please reorder the series so
it comes first and the cleanups follow. The fixing patch does not need
to have perfect naming. Thanks.
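
For reference, the calling convention I read out of the cover letter looks
roughly like the sketch below. The argument lists, return values and the
drew-lock pairing shown here are my own assumptions, not code taken from
the patches:

	/*
	 * Sketch only, not the actual implementation.
	 *
	 * Blocking variant: on success the drew write lock is held, so the
	 * caller must drop it via end_nocow_check() once the NOCOW write
	 * (or the decision to fall back) is done.
	 */
	if (start_nocow_check(inode, pos, &write_bytes) > 0) {
		/* ... NOCOW write path ... */
		end_nocow_check(inode);
	}

	/*
	 * Nowait variant: never blocks and never returns with the lock held,
	 * hence no end_*() counterpart.  A non-positive return means the
	 * caller has to reserve data space (or bail out for RWF_NOWAIT).
	 */
	if (try_nocow_check(inode, pos, &write_bytes) <= 0) {
		/* fall back to COW / normal data space reservation */
	}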
