Re: IRQ affinity enforced only after first interrupt.
On Wed, 4 Apr 2012, Alexander Gordeev wrote:
> On Mon, Mar 26, 2012 at 09:04:20PM +0200, Thomas Gleixner wrote:
> > On Mon, 26 Mar 2012, Yevgeny Petrilin wrote:
> > > > The architecture specific code will determine whether the IRQ could be migrated
> > > > in process context. For example, the IRQ_MOVE_PCNTXT flag will be set on x86
> > > > systems if interrupt remapping is enabled.
> > >
> > > Actually I am encountering this issue with x86, and see different
> > > behavior with different HW devices (NICs). On the same machine I have
> > > one device that responds immediately to affinity changes while the
> > > other one changes the affinity only after the first interrupt.
> >
> > That simply depends on the underlying hardware. On certain hardware we
> > can change the affinity only in hard interrupt context, that is,
> > right when an interrupt of that device is delivered.
> >
> > On the other devices we can change it right away, and the corresponding
> > interrupt chips set IRQ_MOVE_PCNTXT to indicate that.
>
> Actually, even with IRQ_MOVE_PCNTXT capable chips, a hardware handler still
> might be called on a core that belongs to the old affinity, after the
> successful write of the new affinity. Threaded handlers are also racy with
> irq affinity updates.
>
> Is that an inconsistency, a bug, or by design?

Well, irq affinity updates are not designed to be immediate. There is no
point in doing so.

> > There is nothing we can do about this. It's dictated by hardware.
>
> Maybe we could wait for desc->pending_mask to be cleared before returning
> from irq_set_affinity()?

If that device does not issue an interrupt for a long time, e.g. because the
interface is down, then you are stuck there forever.

What's the point of this? One interrupt on the wrong core is nothing we need
to worry about.
Thanks,

	tglx