Re: Object Write Latency

On 09/23/2013 10:38 AM, Andreas Joachim Peters wrote:
We deployed 3 OSDs with EXT4 on RapidDisk in-memory block devices.

The filesystem does 140k append+sync operations per second, and the latency is now:

~1 ms for few-byte objects with a single replica
~2 ms for few-byte objects with three replicas (instead of 65-80 ms)

Interesting! If you now look at the slowest operations via the ceph admin socket with dump_historic_ops, where are those operations spending their time?
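
For reference, the recently tracked ops can be pulled from a running OSD's admin socket and sorted by duration with a small script. This is only a sketch: it assumes the default socket path /var/run/ceph/ceph-osd.0.asok, and the "description"/"duration" field names are from memory and may differ between releases, so adjust as needed:

#!/usr/bin/env python
# Sketch: list the slowest recently tracked ops from one OSD's admin socket.
# Assumes the default admin socket path and that dump_historic_ops returns
# JSON with an "ops" list whose entries carry "description" and "duration"
# fields -- adjust the field names to whatever your release emits.
import json
import subprocess

ASOK = '/var/run/ceph/ceph-osd.0.asok'  # assumed default socket path

raw = subprocess.check_output(
    ['ceph', '--admin-daemon', ASOK, 'dump_historic_ops'])
data = json.loads(raw)

# Sort the tracked ops by total duration, slowest first.
ops = sorted(data.get('ops', []),
             key=lambda o: float(o.get('duration', 0)), reverse=True)
for op in ops:
    print('%8.3fs  %s' % (float(op.get('duration', 0)),
                          op.get('description', '?')))

Each op entry also includes per-event timestamps, which should show which stage the slow ops are stuck in.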


This probably gives the baseline of the best you can do with the current implementation.

==> The 80 ms is probably just a 'feature' of the hardware (JBOD disks/controller), and we might try to find some tuning parameters to improve the latency slightly.

Hardware definitely plays a huge part in Ceph performance. You can run Ceph on just about anything, but it's surprising how differently two roughly similar systems can perform.


Could you explain how the async API functions (is_complete, is_safe) map to these three states:

1) object is transferred from client to all OSDs and is present in memory there
2) object is written to the OSD journal
3) object is committed from OSD journal to the OSD filesystem

Is it correct that the object is visible to clients only after 3) has happened?

Yes, afaik.
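
For what it's worth, here is a rough sketch of how those flags surface in the Python librados bindings (the pool and object names are illustrative, and this is my reading of the librados docs rather than an authoritative statement): is_complete() fires once the write is acknowledged in memory on all replicas (state 1), and is_safe() once it is committed to the journal on all replicas (state 2); the later journal-to-filesystem flush (state 3) is not signalled separately to the client.

#!/usr/bin/env python
# Rough sketch, not authoritative: an async write with python-rados, watching
# the two completion flags.  As I read the librados docs, "complete" means
# the write is in memory on all replicas (state 1) and "safe" means it is on
# stable storage, i.e. the journal, on all replicas (state 2).  The later
# journal->filesystem flush (state 3) is not reported back to the client.
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('data')   # assumes a pool named 'data' exists

def on_complete(comp):
    print('ack: write is in memory on all replicas')

def on_safe(comp):
    print('commit: write is journaled on all replicas')

completion = ioctx.aio_write('latency-test-obj', b'x' * 16, 0,
                             oncomplete=on_complete, onsafe=on_safe)

completion.wait_for_safe()   # block until the commit ack arrives
print('is_complete=%s is_safe=%s'
      % (completion.is_complete(), completion.is_safe()))

ioctx.close()
cluster.shutdown()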


Thanks for your help,
Andreas.




--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

