Re: disk i/o benchmarking inside vm

On 10/22/2011 06:00 PM, Dennis Jacobfeuerborn wrote:
> Running iostat shows no I/O activity when running the tests for /dev/vdb
> which explains the insane numbers. The question is why I get such different
> results when both devices are defined exactly the same way?
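
One quick sanity check while the benchmark is running: watch the device 
from inside the guest. Something along these lines (assuming the device 
is vdb, as in your test) should show whether the writes ever reach the 
virtual disk at all:

  # extended per-device stats for vdb, refreshed every 2 seconds
  iostat -dx vdb 2

If vdb sits at zero during the "fast" run, the writes are presumably 
being absorbed by the guest page cache rather than hitting the disk.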

I'm not very familiar with KVM yet (got my first real lesson today), but 
I noticed you said:
"In the guest the drives are running using virtio with type=raw and 
cache=none." Are these KVM settings, or did you use kernel parameters on 
the guest machine?
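
If those are libvirt settings (rather than something set inside the 
guest), they should show up in the domain XML on the host. Roughly 
(GUEST here is just a stand-in for your domain name):

  # on the host: show the disk definitions for the guest
  virsh dumpxml GUEST | grep -A3 "<disk"

and the vdb stanza would contain something like

  <driver name='qemu' type='raw' cache='none'/>
  <target dev='vdb' bus='virtio'/>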

Also, what about the elevator (I/O scheduler) in the guest? On a VMware 
Server 2 host (on CentOS, so I'm not far OT) it's best to use the 
elevator=noop parameter. I wouldn't expect the elevator to skew results 
quite as much as you're seeing, but what do I know? ;)
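
For what it's worth, you can check and switch the scheduler inside the 
guest on the fly (again assuming vdb):

  # the active scheduler is shown in brackets
  cat /sys/block/vdb/queue/scheduler
  # switch to noop without rebooting
  echo noop > /sys/block/vdb/queue/scheduler

To make it stick across reboots, add elevator=noop to the kernel line in 
the guest's grub.conf.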

-- 
-Eric 'shubes'

_______________________________________________
CentOS-virt mailing list
CentOS-virt@xxxxxxxxxx
http://lists.centos.org/mailman/listinfo/centos-virt

