On 13.01.20 19:19, David Sterba wrote:
> On Mon, Jan 13, 2020 at 12:41:45PM +0800, Qu Wenruo wrote:
>> On 2020/1/10 8:58 AM, Qu Wenruo wrote:
>>> On 2020/1/10 8:21 AM, Qu Wenruo wrote:
>>>> On 2020/1/9 10:37 PM, David Sterba wrote:
>>>>> On Thu, Jan 09, 2020 at 01:54:34PM +0800, Qu Wenruo wrote:
>>>>> We use smp_mb() because this serializes memory among multiple CPUs, when
>>>>> one changes memory but stores it to some temporary structures, while
>>>>> other CPUs don't see the effects. I'm sure you've read about that in the
>>>>> memory barrier docs.
>>>
>>> I guess the main difference between us is the effect of the "per-CPU
>>> visible temporary value".
>>>
>>> It looks like your point is that without rmb() we can't see the
>>> consistent values the writer sees.
>>>
>>> But my point is that even if we can only see a temporary value, the
>>> __before_atomic() mb on the writer side ensures only 3 possible
>>> temporary value combinations can be seen:
>>> (PTR, DEAD), (NULL, DEAD), (NULL, 0).
>>>
>>> The (PTR, 0) combination is ruled out by that writer-side mb.
>>> Thus there is no need for a reader-side mb before test_bit().
>>>
>>> That's why I insist on the "test_bit() can happen whenever it likes"
>>> point, as that has the same effect as a schedule.
>>
>> Can we push the fix to upstream? I hope it gets fixed in a late rc of v5.5.
>
> Yes, the plan is to push it to 5.5-rc so we can get the stable backports.
>
> About the barriers, we seem to have a conclusion to use smp_rmb()/smp_wmb()
> and not smp_mb__before/after_atomic(). Zygo also tested the patch and
> reported it's OK, so I don't want to hold it back.
>
> Understanding memory barriers takes time to digest (which basically
> means developing a CPU simulator in one's head, with speculative writes
> and execution, and then keeping one's sanity while reasoning about them).

Or simply use the memory model tool and write a "simple" litmus test to
see what's possible and what isn't in the given situation.
(And no, I don't think it's that trivial to do that either :) )
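For reference, the kernel's memory model tooling ships litmus tests very close to the smp_wmb()/smp_rmb() pairing agreed on above; the classic message-passing shape looks like the following (a sketch in LKMM litmus syntax, to be checked with herd7 against the kernel's linux-kernel model, not compiled directly):

```
C MP+wmb+rmb

{}

P0(int *x, int *y)
{
	WRITE_ONCE(*x, 1);	/* publish the data */
	smp_wmb();		/* order the two stores */
	WRITE_ONCE(*y, 1);	/* then set the flag */
}

P1(int *x, int *y)
{
	int r0;
	int r1;

	r0 = READ_ONCE(*y);	/* read the flag */
	smp_rmb();		/* order the two loads */
	r1 = READ_ONCE(*x);	/* then read the data */
}

exists (1:r0=1 /\ 1:r1=0)
```

With both barriers in place, herd7 should report the `exists` clause (flag observed but data not) as never reachable; dropping either barrier makes it reachable on weakly ordered architectures. Similar ready-made tests live under tools/memory-model/litmus-tests/ in the kernel tree.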
