2017-10-10 19:22 GMT+03:00 David Sterba <dsterba@xxxxxxx>:
> On Tue, Oct 03, 2017 at 06:06:02PM +0300, Timofey Titovets wrote:
>> We need to call extent_range_clear_dirty_for_io()
>> on the compression range to prevent the application from changing
>> page content while the pages are being compressed.
>>
>> But "(end - start)" can be much (up to 1024 times) bigger
>> than the compression range (BTRFS_MAX_UNCOMPRESSED), so optimize that
>> by calculating the compression range for
>> that loop iteration, and flip bits only on that range
>
> I'm not sure this is safe to do. Compress_file_range gets the whole
> range [start,end] from async_cow_start, and other parts of code might
> rely on the status of the whole range, ie. not partially redirtied.
I checked some of the kernel code; the io path is complex =\.
I see 3 approaches:
1. The dirty status is used to signal the upper level that we failed to
write those pages, so on a later sync() call the kernel can send that
data for writing again.
2. We lock the pages against any changes while we are processing the data.
3. Serialize writes, i.e. if I understood correctly, that allows us not
to send the pages down again for several sync requests.
My code above handles the first and second cases fine, and in theory
could cause some issues with the third, but that does not matter much.
The main design problem, from my point of view, is that we call
that function many times in the loop. Example:
compress_file_range() gets: 0 - 1048576
extent_range_clear_dirty_for_io() will then be called for:
0 - 1048576
131072 - 1048576
262144 - 1048576
... and so on
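
To make that concrete, here is a toy userspace sketch (my own simplified
names, not the actual kernel code), assuming a 1MiB range and the 128K
BTRFS_MAX_UNCOMPRESSED chunk size; it only logs the range that the clear
would be issued on in each pass:

#include <stdio.h>

/* Stand-in for BTRFS_MAX_UNCOMPRESSED (128K). */
#define MAX_UNCOMPRESSED (128 * 1024UL)

/* Stand-in for extent_range_clear_dirty_for_io(): just log the range. */
static void clear_dirty_for_io(unsigned long start, unsigned long end)
{
	printf("clear_dirty_for_io: %lu - %lu\n", start, end);
}

int main(void)
{
	unsigned long start = 0, end = 1048576;

	/*
	 * Mimics the "goto again" loop in compress_file_range(): each pass
	 * handles one 128K chunk, but the clear is issued on the whole
	 * remaining [start, end] every time.
	 */
	while (start < end) {
		clear_dirty_for_io(start, end);
		start += MAX_UNCOMPRESSED;
	}
	return 0;
}

It prints 0 - 1048576, 131072 - 1048576, 262144 - 1048576 and so on,
i.e. exactly the repetition shown above.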
So we can fix that in a different way.
My first thought was to move extent_range_clear_dirty_for_io() out of
the loop, but I think the following is a better approach.
What about:
if (!redirty) {
	extent_range_clear_dirty_for_io(inode, start, end);
	redirty = 1;
}
That will also cover all the cases above, because it locks the whole range as before,
but we only call it once, which fixes the design issue.
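
In the same toy sketch from above, the guard makes the whole-range clear
fire exactly once:

	unsigned long start = 0, end = 1048576;
	int redirty = 0;

	while (start < end) {
		/* Clear dirty on the whole range only on the first pass. */
		if (!redirty) {
			clear_dirty_for_io(start, end);
			redirty = 1;
		}
		start += MAX_UNCOMPRESSED;
	}

That gives a single call for 0 - 1048576, i.e. the same coverage as
before, just without the repeats.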
What do you think?
Thanks.
--
Have a nice day,
Timofey.