On Tue, 25 Mar 2003, Keith Packard wrote:

> Around 18 o'clock on Mar 25, Mark Vojkovich wrote:
>
> >    The bigger problems were things like repeat, which becomes
> > difficult if the hardware doesn't support non-power-of-two textures,
> > which, as far as I can tell, still includes a lot of hardware.
> > Maybe Longhorn will change this.  I don't know.
>
> 2D apps still use repeated patterns for stuff, so it's hard to just throw
> that capability out.  Just as the current server can accelerate some tiles
> better than others, Render will likely run faster with repeated patterns
> of certain sizes.  Perhaps a QueryBestSize function should exist to let
> apps know what "good" sizes might be, but I think we've seen how little
> apps even bother asking.

   Apps don't bother asking.  If it's in the API, they'll use it.
Maybe the hardware will get better at this because of Longhorn.  Hard
to say at this point.

> >    Also, the union of Drawable and Picture presents some problems.
> > That's because much hardware doesn't support linear formats for
> > textures or drawables, or if it does, it supports them with
> > limited functionality.
>
> I guess I'm confused by this -- the 'Picture' is just a way to wrap a
> drawable up with some rendering state and a mapping of pixel values to RGB
> values.  It doesn't imply any kind of linear access; of course, if you do
> want to render with software, it will probably be easier if there's a
> linear mode available.

   The problem is that much hardware needs to treat textures as opaque
objects.  If you can't do everything in hardware, then you need to do
some of it in software, which requires that certain things about
textures be transparent to the CPU, including how the CPU can address
them -- and that is exactly what is least likely to be transparent.
I'm not saying there was a better solution to the problem, just that
there is, well, a problem.

> >    Overall, I think the non-rectangular geometry in Render
> > presents the biggest problem.
> > You're pretty much going to be rendering masks in software that
> > you can pass to the hardware for composite.
>
> Even the polygons permit imprecise rendering where the alpha values can
> differ by a certain amount as long as a few invariants are held.
> Any technique that is equivalent to some kind of oversampling should work
> just fine.

   I wasn't following the details too closely, but I remember the
discussion about how primitives had to join in order to avoid cracks.
You needed some absurd amount of precision to do that.  That's all
going to end up in software, imprecise or not.

   Well, anyhow, the next NVIDIA binary drivers will take a crack at
better acceleration.  GeForce GPUs should be able to do rectangular
composites for the first 14 blend ops with the following sources,
masks and destinations:

   DST:  PICT_a8r8g8b8, PICT_x8r8g8b8, PICT_r5g6b5, PICT_x1r5g5b5

   SRC:  PICT_x8r8g8b8, PICT_a8r8g8b8, PICT_r5g6b5, PICT_x1r5g5b5,
         PICT_a1r5g5b5

   MASK: PICT_a1, PICT_a4, PICT_a8, PICT_x8r8g8b8, PICT_a8r8g8b8,
         PICT_r5g6b5, PICT_x1r5g5b5, PICT_a1r5g5b5

Component alpha is supported.  Some formats are accelerated to a
lesser degree than others.  All current rendering should match
precise, but there's no support for transforms at the moment.  If
both src and mask repeat, the operation can't be accelerated unless
one of them is 1x1 (I suppose you can guess how I implemented
repeat).  Additionally, GeForce3, GeForce4 Ti and GeForce FX add the
PICT_x8b8g8r8 and PICT_a8b8g8r8 formats for sources and masks.  The
GeForce FX should be able to do a lot more, but I haven't bothered
looking into it because I can't test most of what I've already
written.  Needless to say, this stuff is off by default due to the
test and verification problem.

				Mark.
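[Editor's note: for illustration, the constraints Mark describes -- the
format lists, the first-14-ops limit, no transforms, and the "both
repeat" restriction -- could be collected into a single accelerability
check along these lines.  This is a hypothetical sketch with made-up
type and function names (`Pict`, `can_accelerate`), not the actual
NVIDIA driver code.]

```c
#include <stdbool.h>

/* Hypothetical format tags mirroring the Render PICT_* names in the
 * post; the real values live in the server's picture format code. */
typedef enum {
    FMT_a8r8g8b8, FMT_x8r8g8b8, FMT_r5g6b5, FMT_x1r5g5b5,
    FMT_a1r5g5b5, FMT_a1, FMT_a4, FMT_a8,
    FMT_x8b8g8r8, FMT_a8b8g8r8,
    FMT_other
} PictFmt;

typedef struct {
    PictFmt format;
    bool    repeat;     /* picture has repeat enabled          */
    int     width, height;
    bool    transform;  /* picture has a non-identity transform */
} Pict;

/* GeForce3, GeForce4 Ti and GeForce FX additionally accept the BGR
 * orderings as sources and masks; set from the probed chip. */
static bool bgr_ok;

static bool dst_ok(PictFmt f)
{
    return f == FMT_a8r8g8b8 || f == FMT_x8r8g8b8 ||
           f == FMT_r5g6b5   || f == FMT_x1r5g5b5;
}

static bool src_ok(PictFmt f)
{
    if (bgr_ok && (f == FMT_x8b8g8r8 || f == FMT_a8b8g8r8))
        return true;
    return dst_ok(f) || f == FMT_a1r5g5b5;
}

static bool mask_ok(PictFmt f)
{
    return src_ok(f) || f == FMT_a1 || f == FMT_a4 || f == FMT_a8;
}

/* "The first 14 blend ops": Render ops 0..13 (Clear..Saturate). */
static bool op_ok(int op) { return op >= 0 && op <= 13; }

static bool is_1x1(const Pict *p) { return p->width == 1 && p->height == 1; }

/* Formats and op must match the lists above, no transforms, and src
 * and mask may not both repeat unless one of them is 1x1. */
bool can_accelerate(int op, const Pict *src, const Pict *mask,
                    const Pict *dst)
{
    if (!op_ok(op) || src->transform ||
        !src_ok(src->format) || !dst_ok(dst->format))
        return false;
    if (mask) {
        if (mask->transform || !mask_ok(mask->format))
            return false;
        if (src->repeat && mask->repeat &&
            !is_1x1(src) && !is_1x1(mask))
            return false;
    }
    return true;
}
```

Anything this check rejects would fall back to the software paths, as
described above.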