kekko wrote:@TouchMonkey
So you're saying that perspective correction can't work because the triangles are actually in 2D space, right? And the solution you implemented is to fix the texturing pixel by pixel with a simple shader?
That's pretty much correct. You have a vertex with X/Y values that specify a particular pixel, and a depth value that can be whatever the designer wanted and doesn't even have to relate to the other vertices in any way. A shader isn't the ONLY solution, but it's the one I have code in front of me for.
When I originally wrote the PVR code I wasn't using a shader and I had managed to get textures working correctly. I just can't remember how I did it 😀 I needed to go to a shader so I could correctly model some blending modes that aren't available in OpenGL. Unfortunately I don't have a copy of the original source any more, as I've changed revision control systems and my old OpenGL code never made it into the new system. I only have a single copy of my last OpenGL version before I moved everything over to DirectX 10. I'm not sure you'll run into that with the Voodoo, but the main reason I changed was that OpenGL is extremely restrictive about the depth buffer, and finding a well-supported method of having a floating point depth buffer that wasn't restricted to 0-1 was not easy (or possible, really). DirectX is MUCH more flexible. But that's a different story.
But let's see if we can hack around this anyway.
The logic that I did in the vertex shader can just be done directly in your code. If you have a vertex V with X, Y, and Z parameters and a texture coordinate T with U and V parameters, you could use the following code:
glTexCoord2f(T.U * V.Z, T.V * V.Z);
That does everything the one line of vertex shader did: it's just a multiplication of the UV coordinates by the depth.
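The reason multiplying by depth helps at all is the standard perspective-correct interpolation identity: attributes divided by depth interpolate linearly in screen space. If you want to sanity-check that on the CPU first, here's a small sketch (the helper name is my own invention, and it assumes the depth value really behaves like a view-space depth):

```c
#include <assert.h>
#include <math.h>

/* Perspective-correct interpolation of a texture coordinate u between two
 * screen-space endpoints with depths z0 and z1.  t is the interpolation
 * parameter in SCREEN space (0..1).  The trick: interpolate u/z and 1/z
 * linearly, then divide.  Plain linear interpolation of u alone would give
 * the wrong (affine) answer whenever z0 != z1. */
static double perspective_interp(double u0, double z0,
                                 double u1, double z1, double t)
{
    double u_over_z   = (1.0 - t) * (u0 / z0) + t * (u1 / z1);
    double one_over_z = (1.0 - t) * (1.0 / z0) + t * (1.0 / z1);
    return u_over_z / one_over_z;
}
```

Note that with equal depths this collapses to ordinary linear interpolation, which is why flat-on polygons never show the distortion.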
Now for the hard part: replacing the fragment shader with fixed-function commands. I remember it took a couple of tries to get working correctly before, but it isn't impossible.
The difficulty comes in that there are a number of different texture samplers, and I don't think the default implementation uses texture2DProj(); I think it just calls texture2D(). There's also texture2DRect(), which is used when you specifically don't want any correction done.
As I said I can't remember what my final solution was but here's a couple of things to try.
Attempt 1: Try using the 4 parameter version of glTexCoord. The 3rd parameter is used for 3D textures, which you're not using, but the 4th parameter is used as a modifier for the other parameters. This may seem counter-intuitive but try modifying the glTexCoord command again as follows:
glTexCoord4f(T.U * V.Z, T.V * V.Z, 0.0f, 1.0f / V.Z);
Basically we're multiplying UV by the depth, then supplying the 4th parameter (q, in OpenGL's s/t/r/q naming) as the inverse of the depth. What's funny is that s, t and r get divided by q, which would seem to undo everything, but I think it may work.
If that doesn't work, try switching the multiplies and the divide:
glTexCoord4f(T.U / V.Z, T.V / V.Z, 0.0f, V.Z);
While you're at it, might as well try these iterations as well:
glTexCoord4f(T.U, T.V, 0.0f, 1.0f / V.Z);
glTexCoord4f(T.U, T.V, 0.0f, V.Z);
I think the first one is the most likely to work.
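If it helps to experiment before touching GL, the q divide can be simulated on the CPU: for already-screen-space (w = 1) vertices the rasterizer interpolates s and q linearly across the edge and the sampler uses s/q per fragment. A tiny harness for trying the encodings above (the function name is my own, and "linear interpolation" is the assumption that w = 1 defeats OpenGL's own correction):

```c
#include <assert.h>
#include <math.h>

/* Simulates a projective texture lookup along one edge of a w=1
 * (screen-space) triangle: s and q are interpolated linearly in screen
 * space, then the sampler divides.  (s0,q0) and (s1,q1) are whatever
 * encodings you would pass to glTexCoord4f at each endpoint. */
static double projective_lookup(double s0, double q0,
                                double s1, double q1, double t)
{
    double s = (1.0 - t) * s0 + t * s1;
    double q = (1.0 - t) * q0 + t * q1;
    return s / q;
}
```

As a reference point: encoding s = U/Z and q = 1/Z at each vertex makes this reproduce the perspective-correct result exactly (it's the same identity as interpolating u/z and 1/z then dividing), which gives you something to compare the attempts against when you're not sure which way round the multiplies and divides go.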
If that doesn't work there's something else we can try.
Attempt 2: Change glTexCoord back to using the 2 coordinate version (with the multiplication) but change your glVertex commands to use the 4 parameter version. You would end up with:
glTexCoord2f(T.U * V.Z, T.V * V.Z);
glVertex4f(V.X, V.Y, V.Z, 1.0f / V.Z);
(Note the order: glTexCoord has to come before the matching glVertex, because glVertex is what latches the current texture coordinate into the vertex.)
If the results aren't quite right, try dividing T.UV instead of multiplying, or just using the normal UV.
I'm not very confident about this one, Attempt 1 seems much more likely, but you can try it anyway.
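One caveat to keep in mind with Attempt 2 (and part of why I'm not confident in it): the w you pass to glVertex4f also feeds the position divide, so w = 1/V.Z rescales the on-screen X and Y by V.Z unless something else compensates. The arithmetic is easy to check:

```c
#include <assert.h>
#include <math.h>

/* The screen position comes from dividing clip coordinates by w, so a
 * vertex submitted with w = 1/z lands at x*z instead of x -- i.e. the
 * geometry moves unless x and y were pre-multiplied by 1/z to match. */
static double ndc(double x, double w)
{
    return x / w;
}
```

So if Attempt 2 shifts or scales your geometry, that divide is the first thing to suspect.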
kekko wrote:@TouchMonkey
What about transforming the 2D vertex X,Y coordinates back to 3D using an inverse projection formula, then using glFrustum (or gluPerspective) to represent the triangles in 3D space? Might that work?
Technically that could work, but I have a feeling it wouldn't be worth the effort. To summarize: you're missing a lot of information, and an entire vertex parameter, that the original calculations used when positioning the vertex in 3D space. Trying to project all of the coordinates into your own 3D space, and then have your own view/projection math move them back exactly to their original XY positions, is extremely difficult and will break in a lot of cases where you have odd Z (depth) values. Not to mention you already have limited depth buffer precision, so you want to avoid as much extra math as possible or pixels might start showing up in the wrong order when polygons are close together in space.
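To make the round-trip concrete, here's a toy version with a made-up projection x' = f*x/z (f is an invented focal factor here, not anything from the real hardware). The inversion itself is trivial; the hard part is that you don't actually know the real f, and odd depth values blow up the divide:

```c
#include <assert.h>
#include <math.h>

/* Toy pinhole projection and its inverse.  Recovering 3D from screen
 * coordinates needs the depth AND the projection parameters; guess f
 * wrong and the reprojected vertex lands on the wrong pixel, and a depth
 * near zero makes the divide explode. */
static double project(double x, double z, double f)
{
    return f * x / z;
}

static double unproject(double x_screen, double z, double f)
{
    return x_screen * z / f;
}
```

The round trip only closes when the same f and the same per-vertex z go in both directions, which is exactly the information you don't reliably have.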
Like I said, though, that doesn't mean it can't be done; it will just be hard, and there are easier ways to go about fixing texture perspective issues.
I hope that all makes sense, or at least provides some insight. This stuff is pretty complicated. I wish I still had my original code; let that be a lesson that you should always use a good revision control system and never throw away the history when moving things around.