Re: [PATCH v9 05/19] qspinlock: Optimize for smaller NR_CPUS

On 04/17/2014 11:58 AM, Peter Zijlstra wrote:
> On Thu, Apr 17, 2014 at 11:03:57AM -0400, Waiman Long wrote:
>> +static __always_inline void
>> +clear_pending_set_locked(struct qspinlock *lock, u32 val)
>> +{
>> +	struct __qspinlock *l = (void *)lock;
>> +
>> +	ACCESS_ONCE(l->locked_pending) = 1;
>> +}
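
For anyone following along: the single halfword store above works because, with
NR_CPUS small enough that the tail fits in 16 bits, the pending bit grows into a
full byte sitting next to the locked byte. A minimal sketch of the little-endian
overlay this relies on (field names taken from the quoted code; the exact
definition and the big-endian ordering live elsewhere in the series):

	struct __qspinlock {
		union {
			atomic_t val;			/* the whole 32-bit lock word */
			struct {
				u8	locked;		/* bits  0-7  */
				u8	pending;	/* bits  8-15 */
			};
			struct {
				u16	locked_pending;	/* locked + pending as one halfword */
				u16	tail;		/* bits 16-31: queue tail */
			};
		};
	};

	/*
	 * Storing 1 to locked_pending sets locked to 1 and clears pending
	 * to 0 in a single store, with no atomic read-modify-write needed.
	 */
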
>> @@ -157,8 +251,13 @@ static inline int trylock_pending(struct qspinlock *lock, u32 *pval)
>>   	 * we're pending, wait for the owner to go away.
>>   	 *
>>   	 * *,1,1 ->  *,1,0
>> +	 *
>> +	 * this wait loop must be a load-acquire such that we match the
>> +	 * store-release that clears the locked bit and create lock
>> +	 * sequentiality; this because not all try_clear_pending_set_locked()
>> +	 * implementations imply full barriers.
> You renamed the function referred to in the above comment.


Sorry, will fix the comments.
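
Once the comment is updated to name clear_pending_set_locked(), the pairing it
describes would look roughly like this (a sketch only, not the exact v9 code;
smp_load_acquire()/smp_store_release() and _Q_LOCKED_MASK as used elsewhere in
the series):

	/* pending waiter: acquire-load the lock word until the owner is gone */
	while ((val = smp_load_acquire(&lock->val.counter)) & _Q_LOCKED_MASK)
		cpu_relax();

	/* release side: a store-release clears the locked byte */
	smp_store_release(&l->locked, 0);

The acquire on the waiter's load pairs with the release on the unlocking store,
so the critical sections stay ordered even on implementations of
clear_pending_set_locked() that do not imply a full barrier.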

-Longman
