Re: vhost-blk development

In this particular case, I did intend to deploy these instances directly on 
the ramdisk.  I want to squeeze every drop of performance out of these 
instances for use cases with lots of concurrent accesses.  I thought it 
would be possible to achieve improvements of an order of magnitude or more 
over SSD, but so far that does not seem to be the case.

I am purposefully not using O_DIRECT, since most workloads will not be using 
it, although I did notice better performance when I did use it.  I have 
already identified the page cache as a hindrance as well.
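(For context, here is a rough C sketch of what O_DIRECT asks of an 
application; the device path and sizes are only illustrative, not my actual 
setup.  The alignment requirements are the main reason I expect most 
workloads to skip it.)

/* Illustrative only: read one block with O_DIRECT, bypassing the page
 * cache.  The buffer, offset and length must all be aligned (typically to
 * the logical block size), which ordinary buffered I/O never requires. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    void *buf;
    int fd = open("/dev/vda", O_RDONLY | O_DIRECT);  /* example device path */
    if (fd < 0) {
        perror("open");
        return 1;
    }
    if (posix_memalign(&buf, 4096, 4096)) {          /* aligned buffer */
        close(fd);
        return 1;
    }
    ssize_t n = pread(fd, buf, 4096, 0);             /* aligned length/offset */
    if (n < 0)
        perror("pread");
    else
        printf("read %zd bytes\n", n);
    free(buf);
    close(fd);
    return 0;
}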

I seem to have hit performance ceilings inside the KVM guests that are much 
lower than those of the host they are running on.  I am seeing many more 
interrupts and context switches on the host than in the guests, and I am 
looking for any and all ways to cut these down.

I had read somewhere that vhost-blk may help.  However, those patches were 
posted on qemu-devel in 2010, with some activity on LKML in 2011, but not 
much since.  I feared that the reason they have still not been merged might 
be bugs, an incomplete implementation, or something of the sort.

Anyhow, thank you for your quick responses.  I have spent several weeks 
investigating ways to boost performance in this use case and am left with 
few remaining options.  I hope I have communicated clearly what I am trying 
to accomplish, and why I am inquiring specifically about vhost-blk.

Regards,

-Mike


----- Original Message -----
From: "Stefan Hajnoczi" <stefanha@xxxxxxxxx>
To: "Michael Baysek" <mbaysek@xxxxxxxxxxxxx>
Cc: kvm@xxxxxxxxxxxxxxx
Sent: Wednesday, April 11, 2012 3:19:48 AM
Subject: Re: vhost-blk development

On Tue, Apr 10, 2012 at 6:25 PM, Michael Baysek <mbaysek@xxxxxxxxxxxxx> wrote:
> Well, I'm trying to determine which I/O method currently has the least performance overhead and gives the best performance for both reads and writes.
>
> I am doing my testing by putting the entire guest onto a ramdisk.  I'm working on an i5-760 with 16GB RAM and VT-d enabled.  I am running the standard CentOS 6 kernel with the 0.12.1.2 release of qemu-kvm that comes stock on CentOS 6.  The guest is configured with 512 MB RAM and 4 CPU cores, with its /dev/vda being the ramdisk on the host.

Results collected on a ramdisk usually do not reflect the performance
you get with a real disk or SSD.  I suggest using the host/guest
configuration you want to deploy.

> I've been using iozone 3.98 with -O -l32 -i0 -i1 -i2 -e -+n -r4K -s250M to measure performance.

I haven't looked up the options but I think you need -I to use
O_DIRECT and bypass the guest page cache - otherwise you are not
benchmarking I/O performance but overall file system/page cache
performance.
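For example, assuming the -I option of your iozone build maps to O_DIRECT,
something like

  iozone -I -O -l32 -i0 -i1 -i2 -e -+n -r4K -s250M

should take the guest page cache out of the picture for the benchmark.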

Stefan

