Re: how to debug slow rbd block device

Hi,

>> So try enabling RBD writeback caching — see http://marc.info/?l=ceph-devel&m=133758599712768&w=2
>> will test tomorrow. Thanks.
Can we pass this via the qemu -drive option?
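(For reference, it should be possible to enable the cache either cluster-wide in ceph.conf or per-drive in the qemu drive string. A sketch; the exact option spellings here are assumptions based on the librbd/qemu integration, not verified against this setup:)

```ini
# In ceph.conf on the client host (assumed section/key names):
[client]
rbd cache = true

# Or inline in the qemu drive string (hypothetical invocation):
# qemu ... -drive format=rbd,file=rbd:rbd/vm-disk:rbd_cache=true,cache=writeback,if=virtio
```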

Stefan


Am 22.05.2012 23:11, schrieb Greg Farnum:
> On Tuesday, May 22, 2012 at 2:00 PM, Stefan Priebe wrote:
>> Am 22.05.2012 22:49, schrieb Greg Farnum:
>>> Anyway, it looks like you're just paying a synchronous write penalty
>>  
>> What does that mean exactly? Shouldn't a single-threaded write to four  
>> 260MB/s devices give at least 100MB/s?
> 
> Well, with dd you've got a single thread issuing synchronous IO requests to the kernel. We could have it set up so that those synchronous requests get split up, but they aren't, and between the kernel and KVM it looks like when it needs to make a write out to disk it sends one request at a time to the Ceph backend. So you aren't writing to four 260MB/s devices; you are writing to one 260MB/s device without any pipelining — meaning you send off a 4MB write, then wait until it's done, then send off a second 4MB write, then wait until it's done, etc.
> Frankly I'm surprised you aren't getting a bit more throughput than you're seeing (I remember other people getting much more out of less beefy boxes), but it doesn't much matter because what you really want to do is enable the client-side writeback cache in RBD, which will dispatch multiple requests at once and not force writes to be committed before reporting back to the kernel. Then you should indeed be writing to four 260MB/s devices at once. :)
> 
>>  
>>> since with 1 write at a time you're getting 30-40MB/s out of rados bench, but with 16 you're getting>100MB/s.
>>> (If you bump up past 16 or increase the size of each with -b you may  
>>> find yourself getting even more.)
>> yep noticed that.
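For anyone reproducing this, the rados bench runs being discussed look roughly like the following; pool name and run duration are placeholders:

```
rados bench -p rbd 60 write -t 1             # 1 op in flight: the ~30-40 MB/s case
rados bench -p rbd 60 write -t 16            # 16 ops in flight: the >100 MB/s case
rados bench -p rbd 60 write -t 16 -b 8388608 # larger 8 MB writes via -b
```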
>>  
>>> So try enabling RBD writeback caching — see http://marc.info/?l=ceph-devel&m=133758599712768&w=2
>> will test tomorrow. Thanks.
>>  
>> Stefan  
> 
> 

