Re: [Xen-devel] [PATCH 0001/001] xen: multi page ring support for block devices
On Mar 7, 2012 4:33 AM, "Jan Beulich" <JBeulich@xxxxxxxx> wrote:
> >>> On 06.03.12 at 18:20, Konrad Rzeszutek Wilk <konrad@xxxxxxxxxx> wrote:
> > -> the usage of XenbusStateInitWait? Why do we introduce that? Looks
> > like a fix to something.
> No, this is required to get the negotiation working (the frontend must
> not try to read the new nodes until it can be certain that the backend
> populated them). However, as already pointed out in an earlier reply
> to Santosh, the way this is done here doesn't appear to allow for the
> backend to already be in InitWait state when the frontend gets
> > -> XENBUS_MAX_RING_PAGES - why 2? Why not 4? What is the optimal
> > default size for SSD usage? 16?
> What do SSDs have to do with a XenBus definition? Imo it's wrong (and
> unnecessary) to introduce a limit at the XenBus level at all - each driver
> can do this for itself.
The patch description should mention what the benefit of a multi-page ring is.
> As to the limit for SSDs in the block interface - I don't think the number
> of possibly simultaneous requests has anything to do with this. Instead,
> I'd expect the request number/size/segments extension that NetBSD
> apparently implements to possibly have an effect.
.. which sounds to me like increasing the bandwidth of the protocol. That should be mentioned somewhere in the git description.
_______________________________________________
Virtualization mailing list
Virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx
https://lists.linuxfoundation.org/mailman/listinfo/virtualization