
Re: CPU topology 'sockets' handling guest vs host



On Mon, Mar 26, 2012 at 16:11:08 +0100, Daniel P. Berrange wrote:
> On Mon, Mar 26, 2012 at 05:08:05PM +0200, Jiri Denemark wrote:
> > On Mon, Mar 26, 2012 at 15:42:58 +0100, Daniel P. Berrange wrote:
> > > 
> > > So, the XML checker is mistaking 'sockets' as the total number of sockets,
> > > rather than the per-node socket count. We need to fix this bogus check
> > 
> > I guess what we actually want to do is to report total number of sockets in
> > host cpu xml. Sockets per NUMA node has been proven to be a bad decision and
> > we should not let it infect other areas.
> 
> No, we can't change that - we explicitly fixed that a while back
> because it breaks the VIR_NODEINFO_MAXCPUS macro to do that.
> 
> 
> commit ac9dd4a676f21b5e3ca6dbe0526f2a6709072beb
> Author: Jiri Denemark <jdenemar@xxxxxxxxxx>
> Date:   Wed Nov 24 11:25:19 2010 +0100
> 
>     Fix host CPU counting on unusual NUMA topologies

Yes, this is the proof of "sockets per node is a bad thing" I was referring to
:-) That design broke on some funky NUMA topologies where NUMA nodes were not
composed of whole sockets (they were composed of cores instead). The ideal fix
would have been to report the total number of sockets, but we couldn't do that
because the VIR_NODEINFO_MAXCPUS macro used nodes * sockets in its
computation. So instead, we now pretend (in the nodeinfo structure) that there
is just 1 NUMA node; for better backward compatibility, we only do so when we
can't simply divide sockets by nodes.

We should have more freedom to fix the XML (and provide the total number of
sockets there), since we don't have any macro that takes the XML and computes
the total number of CPUs from it.

We should really try hard to avoid using sockets-per-node semantics in guest
XML, since that would unnecessarily limit the usable topologies. One option is
to make CPU topology completely separate from NUMA, i.e., sockets would mean
the total number of sockets the guest will see (I deliberately chose a
topology that cannot be represented with sockets-per-node semantics):

   <vcpu>12</vcpu>
   <cpu>
     <topology sockets='3' cores='4' threads='1'/>
     <numa>
       <cell cpus='0-2' memory='512000'/>
       <cell cpus='3-5' memory='512000'/>
       <cell cpus='6-8' memory='512000'/>
       <cell cpus='9-11' memory='512000'/>
     </numa>
   </cpu>

or, alternatively, make CPU topology describe only a single socket (i.e.,
sockets must always be 1), with the total number of CPUs provided only by
<vcpu/>:

   <vcpu>12</vcpu>
   <cpu>
     <topology sockets='1' cores='4' threads='1'/>
     <numa>
       <cell cpus='0-2' memory='512000'/>
       <cell cpus='3-5' memory='512000'/>
       <cell cpus='6-8' memory='512000'/>
       <cell cpus='9-11' memory='512000'/>
     </numa>
   </cpu>

I think the latter (sockets is always 1) is not a good option either, since it
would be incompatible with numerous existing guest XMLs and is also harder to
deal with within libvirt.

Jirka

--
libvir-list mailing list
libvir-list@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/libvir-list

