Re: How can a bridge be optimized?
On Tue, Jun 12, 2012 at 08:07:44AM -0400, Whit Blauvelt wrote:
> On Tue, Jun 12, 2012 at 11:35:28AM +0400, Andrey Korolyov wrote:
> > Just a stupid question: did you pin guest vcpus or NIC hw queues?
> > An unpinned VM may seriously affect I/O performance when running on
> > the same core set as the NIC (hw RAID/FC/etc).
>
> Thanks. No question is stupid. Obviously, this shouldn't be so slow at VM
> I/O, so I'm missing something. In my defense, there is no single coherent
> set of documents on this stuff, unless those are kept in a secret place.
> It would be a fine thing if a few of the people who know all the "obvious"
> stuff about libvirt-based KVM configuration would collaborate and document
> it fully somewhere.
>
> When I Google "pinned vcpu" all the top responses are about Xen. I run
> KVM. I find mention that "KVM uses the linux scheduler for distributing
> workload rather than actually assigning physical CPUs to VMs," at
> http://serverfault.com/questions/235143/can-i-provision-half-a-core-as-a-virtual-cpu.

It is possible to pin to pCPUs in KVM using the same XML syntax as with
Xen. First you can specify the overall VM's CPU affinity:

  <vcpu cpuset='1,2'>4</vcpu>

This creates a 4-vCPU guest where all the KVM threads are restricted to
pCPUs 1 and 2. You can go further and lock down individual vCPUs to
individual pCPUs by adding something like this:

  <cputune>
    <vcpupin vcpu='0' cpuset='1'/>
    <vcpupin vcpu='1' cpuset='1'/>
    <vcpupin vcpu='2' cpuset='2'/>
    <vcpupin vcpu='3' cpuset='2'/>
  </cputune>

While it is true that in general the Linux scheduler does a reasonable job
of managing vCPUs, it is definitely possible to improve things with
explicit pinning, particularly if you are producing formal benchmarks. The
downside of pinning is that with very variable workloads you may lower
your overall utilization by not letting the kernel move threads around on
demand.
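The same pinning can also be applied or inspected at runtime with virsh,
without editing the XML; a quick sketch, assuming a guest named "myguest"
(the domain name here is just for illustration):

  # Show each vCPU's current pCPU placement and affinity
  virsh vcpuinfo myguest

  # Pin vCPUs 0 and 1 to pCPU 1, matching the <cputune> example above
  virsh vcpupin myguest 0 1
  virsh vcpupin myguest 1 1

  # With no vCPU/CPU-list arguments, vcpupin lists the current pinning
  virsh vcpupin myguest

Note that runtime pinning like this does not persist across a guest
restart unless it is also written into the domain XML.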
If your host machine has multiple NUMA nodes, it is well worth trying to
pin the VM as a whole (<vcpu>) so that it fits inside a single NUMA node,
while leaving individual vCPUs free to float across the pCPUs of that node.

Daniel

-- 
|: http://berrange.com  -o-  http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org                    -o-  http://virt-manager.org :|
|: http://autobuild.org        -o-  http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org       -o-  http://live.gnome.org/gtk-vnc :|

_______________________________________________
libvirt-users mailing list
libvirt-users@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/libvirt-users
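[To illustrate the NUMA suggestion above, a sketch assuming pCPUs 0-3 all
live on host node 0 -- the exact topology is an assumption, so check yours
with "numactl --hardware" first:

  <vcpu cpuset='0-3'>4</vcpu>
  <numatune>
    <memory mode='strict' nodeset='0'/>
  </numatune>

The cpuset keeps all vCPU threads on node 0's cores but lets the scheduler
move them within that set, and the <numatune> element keeps the guest's
memory allocated from the same node, avoiding cross-node memory access.]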