Re: [PATCH 5/5] arm: exynos: Add MCPM call-back functions

Hi Dave,

On Thu, Apr 17, 2014 at 9:08 PM, Dave Martin <Dave.Martin@xxxxxxx> wrote:
> On Thu, Apr 17, 2014 at 04:20:26PM +0100, Nicolas Pitre wrote:
>> On Thu, 17 Apr 2014, Dave Martin wrote:
>>
>> > On Thu, Apr 17, 2014 at 12:41:22AM +0530, Abhilash Kesavan wrote:
>> >
>> > [...]
>> >
>> > > > The fact that there is no C interface for enabling ACE ports is
>> > > > deliberate.  For CPUs connected to ACE and managed via MCPM,
>> > > > it is incorrect to enable CCI via C code, since the safe window
>> > > > is the window during which all outbound CPUs have reached CLUSTER_DOWN
>> > > > and all inbound CPUs have not turned their MMU on yet (and thus cannot
>> > > > execute any general Linux C code).
>> > > >
>> > > > There might be scenarios involving GPUs and other non-CPU devices
>> > > > connected to ACE ports where the device cannot enable CCI snoops
>> > > > for itself -- but this would require a holding-pen protocol to enable
>> > > > the device to wait and signal a CPU to enable CCI snoops on the device's
>> > > > behalf before the device proceeds.  It is not the correct solution for
>> > > > CPU clusters attached to ACE, precisely because we can be more efficient
>> > > > in that case.
>> > > >
>> > > > In fact, because you implement a power_up_setup method that calls
>> > > > cci_enable_port_for_self, CCI snoops are actually enabled twice, making
>> > > > the above code appear redundant.  Have I missed something?
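
For context, the power_up_setup hook mentioned above runs before the inbound
CPU has turned its MMU on, so it cannot be ordinary C code; in practice it is
a tiny naked function that tail-calls cci_enable_port_for_self() when invoked
at cluster level.  A minimal sketch, modeled on the existing vexpress MCPM
backends (the exynos_pm_power_up_setup name is only illustrative):

#include <linux/compiler.h>

/* Runs with the MMU off: no stack and no C data accesses allowed. */
static void __naked exynos_pm_power_up_setup(unsigned int affinity_level)
{
	asm volatile ("\n"
	"cmp	r0, #1				@ affinity level 1 == cluster?\n"
	"bxne	lr				@ CPU level: nothing to do\n"
	"b	cci_enable_port_for_self	@ enable own snoops, MMU still off\n");
}

The hook is handed to the MCPM core once at init time via
mcpm_sync_init(exynos_pm_power_up_setup); after it has enabled the snoops in
that MMU-off window, enabling them again from C is indeed redundant.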
>> > > When a cluster is turned off, the snoops for both clusters are
>> > > disabled. When the other cluster comes back up, its snoops are
>> > > re-enabled via power_up_setup, and the snoops for the already-running
>> > > cluster are re-enabled here. As previously mentioned, I will be
>> > > dropping this change.
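
For comparison, the conventional last-man power-down path (as in the vexpress
TC2/DCSCB backends) disables only the outbound cluster's own ACE port, and
only inside the MCPM outbound-critical window; the other cluster's snoops are
never touched from C.  A rough sketch, with the exynos_power_down name and
the trivial last-man handling as placeholders:

#include <linux/arm-cci.h>
#include <linux/types.h>
#include <asm/barrier.h>
#include <asm/cacheflush.h>
#include <asm/cputype.h>
#include <asm/mcpm.h>

static void exynos_power_down(void)
{
	unsigned int mpidr = read_cpuid_mpidr();
	unsigned int cpu = MPIDR_AFFINITY_LEVEL(mpidr, 0);
	unsigned int cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
	bool last_man = true;	/* real code derives this from use counts */

	__mcpm_cpu_going_down(cpu, cluster);

	if (last_man && __mcpm_outbound_enter_critical(cpu, cluster)) {
		/* Whole cluster going down: flush all levels, drop own snoops. */
		v7_exit_coherency_flush(all);
		cci_disable_port_by_cpu(mpidr);
		__mcpm_outbound_leave_critical(cluster, CLUSTER_DOWN);
	} else {
		/* Only this CPU going down: flush to the level of unification. */
		v7_exit_coherency_flush(louis);
	}

	__mcpm_cpu_down(cpu, cluster);

	/*
	 * The platform is expected to cut power while we sit in wfi();
	 * if we fall through and return, the MCPM core treats the
	 * power-down as having been denied.
	 */
	wfi();
}

On the way back up, the incoming cluster's snoops are re-enabled solely by the
MMU-off power_up_setup hook, so no additional C-level CCI enable is needed.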
>> >
>> > That's a fair point.  If there is only one cluster alive, turning off
>> > snoops for it should be safe, because there is no second cluster for
>> > it to maintain coherency with.
>>
>> But it's not that simple, as I explained in a previous email.  If the
>> other cluster has gone down via cpuidle, it may come back up at any
>> moment without warning.  We do have the infrastructure in place to cope
>> with possible races handling the CCI within a cluster.  We do not have
>> anything for cross-cluster races.  And before we do, it is necessary to
>> know if it is worth it.
>
> Agreed.  It could be done, perhaps by the approach I already considered
> for handling multilevel clusters, but it is far from trivial, and I
> would like to see some measurement of the potential benefit before
> getting into it.
I am afraid I do not have any power numbers for this change.

Next version of the patches will be posted soon.

Regards,
Abhilash
>
> Cheers
> ---Dave

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@xxxxxxxxxxxxxxxxxxx
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel



