- Subject: Re: [Xen-devel] [PATCH 0001/001] xen: multi page ring support for block devices
- From: "Jan Beulich" <JBeulich@xxxxxxxx>
- Date: Wed, 07 Mar 2012 09:33:26 +0000
- Cc: "David Vrabel" <david.vrabel@xxxxxxxxxx>, "Ian Campbell" <Ian.Campbell@xxxxxxxxxx>, "Paul Durrant" <Paul.Durrant@xxxxxxxxxx>, "waldi@xxxxxxxxxx" <waldi@xxxxxxxxxx>, "weiyi.huang@xxxxxxxxx" <weiyi.huang@xxxxxxxxx>, "jeremy@xxxxxxxx" <jeremy@xxxxxxxx>, "akpm@xxxxxxxxxxxxxxxxxxxx" <akpm@xxxxxxxxxxxxxxxxxxxx>, "virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx" <virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxx>, "joe.jin@xxxxxxxxxx" <joe.jin@xxxxxxxxxx>, "konrad.wilk@xxxxxxxxxx" <konrad.wilk@xxxxxxxxxx>, "lersek@xxxxxxxxxx" <lersek@xxxxxxxxxx>, "rusty@xxxxxxxxxxxxxxx" <rusty@xxxxxxxxxxxxxxx>, "dgdegra@xxxxxxxxxxxxx" <dgdegra@xxxxxxxxxxxxx>, "linux-kernel@xxxxxxxxxxxxxxx" <linux-kernel@xxxxxxxxxxxxxxx>, "linux-pci@xxxxxxxxxxxxxxx" <linux-pci@xxxxxxxxxxxxxxx>, "netdev@xxxxxxxxxxxxxxx" <netdev@xxxxxxxxxxxxxxx>, "jbarnes@xxxxxxxxxxxxxxxx" <jbarnes@xxxxxxxxxxxxxxxx>, "paul.gortmaker@xxxxxxxxxxxxx" <paul.gortmaker@xxxxxxxxxxxxx>
- In-reply-to: <CAPbh3rsExLtohBwVd_scYuO=GN1iZE5egQQ3x5M59YUno5Rtyw@mail.gmail.com>
- References: <firstname.lastname@example.org> <7914B38A4445B34AA16EB9F1352942F1010A1FA12364@SJCPMAILBOX01.citrite.net> <CAPbh3rsExLtohBwVd_scYuO=GN1iZE5egQQ3x5M59YUno5Rtyw@mail.gmail.com>
>>> On 06.03.12 at 18:20, Konrad Rzeszutek Wilk <konrad@xxxxxxxxxx> wrote:
> -> the usage of XenbusStateInitWait? Why do we introduce that? Looks
> like a fix to something.
No, this is required to get the negotiation working (the frontend must
not try to read the new nodes until it can be certain that the backend
populated them). However, as already pointed out in an earlier reply
to Santosh, the way this is done here doesn't appear to allow for the
backend to already be in InitWait state by the time the frontend gets
initialised.
> -> XENBUS_MAX_RING_PAGES - why 2? Why not 4? What is the optimal
> default size for SSD usage? 16?
What do SSDs have to do with a XenBus definition? Imo it's wrong (and
unnecessary) to introduce a limit at the XenBus level at all - each driver
can do this for itself.
As to the limit for SSDs in the block interface - I don't think the number
of simultaneously outstanding requests has anything to do with this.
Instead, I'd expect the request number/size/segments extension that NetBSD
apparently implements to be what could have an effect.