VOGONS


First post, by squareguy

User metadata
Rank Oldbie

When did games actually require or benefit from Hardware T&L? I cannot find a good list anywhere.

It looks like Quake 3 Arena can get a boost from having Hardware T&L and my K6-3 needs all the help it can get 🤣.

Any good examples of games? It seems most games did not make use of it at the time.

Gateway 2000 Case and 200-Watt PSU
Intel SE440BX-2 Motherboard
Intel Pentium III 450 CPU
Micron 384MB SDRAM (3x128)
Compaq Voodoo3 3500 TV Graphics Card
Turtle Beach Santa Cruz Sound Card
Western Digital 7200-RPM, 8MB-Cache, 160GB Hard Drive
Windows 98 SE

Reply 1 of 10, by leileilol

User metadata
Rank l33t++

Use of it in OpenGL was "automatic": the card's ICD performed the transform functions appropriately for its hardware. With Direct3D it had to be implemented more manually in most cases. Major D3D7 HW T&L games were more of a 2001/2002 thing, with very few early exceptions like Giants: Citizen Kabuto.

long live PCem

Reply 2 of 10, by squareguy

User metadata
Rank Oldbie

Very interesting. I will look at OpenGL games that will benefit then.

Gateway 2000 Case and 200-Watt PSU
Intel SE440BX-2 Motherboard
Intel Pentium III 450 CPU
Micron 384MB SDRAM (3x128)
Compaq Voodoo3 3500 TV Graphics Card
Turtle Beach Santa Cruz Sound Card
Western Digital 7200-RPM, 8MB-Cache, 160GB Hard Drive
Windows 98 SE

Reply 3 of 10, by idspispopd

User metadata
Rank Oldbie

GLQuake already benefits. See e.g. Modern graphics on a 486.
A GF2 MX is much faster than a TNT with slower CPUs, even at low resolutions where the GPU is not the bottleneck.
A Voodoo3 is about as fast as a GF2 MX, so at least for GLQuake the lower driver overhead seems to be just as useful as hardware T&L.

Reply 4 of 10, by Scali

User metadata
Rank l33t

The technical background is like this:
In order to use hardware T&L, the GPU wants the data in ready-to-use buffers, preferably in video memory.
In DX6 and earlier, you passed a pointer to system memory directly to the DrawPrimitive() call.
So the driver didn't see the data until it was ready to draw. The problem is that it is difficult to make any assumptions about this data: the same buffer can contain different geometry for every call (the only way to tell would be to copy the whole buffer and compare it every time, which is too costly), so there's no point in optimizing the data for GPU T&L and storing it in video memory, because it may only be used once.
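Roughly what that looked like in code (a sketch from memory, so take the exact signatures with a grain of salt; lpDevice is assumed to be an IDirect3DDevice3 obtained elsewhere):

    // DX6-style immediate drawing: the vertices live in a plain system-memory
    // array, and the driver only sees them at the moment of the draw call.
    struct MyVertex { float x, y, z; DWORD color; };     // matches D3DFVF_XYZ | D3DFVF_DIFFUSE
    MyVertex tri[3];                                      // refilled by the app whenever it likes

    lpDevice->DrawPrimitive(D3DPT_TRIANGLELIST,
                            D3DFVF_XYZ | D3DFVF_DIFFUSE,  // vertex format
                            tri,                          // raw system-memory pointer
                            3,                            // vertex count
                            0);                           // flags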

In DX7, Microsoft specified vertex buffers, which you had to lock before you could modify them. So now the API knew *exactly* when a buffer was being modified. You could also specify that you wanted your buffers stored in video memory. So now the API had enough heuristics to know when data was static, and when it could be uploaded and optimized for GPU T&L.
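In code it looks roughly like this (again a sketch from memory, names and flags approximate; lpD3D and lpDevice are the IDirect3D7/IDirect3DDevice7 interfaces created elsewhere):

    // DX7-style vertex buffer: created once, filled under Lock()/Unlock(),
    // so the runtime knows exactly when it changes and where it may live.
    D3DVERTEXBUFFERDESC desc = { sizeof(desc) };
    desc.dwCaps        = D3DVBCAPS_WRITEONLY;             // no SYSTEMMEMORY cap, so a HW T&L
                                                           // device can keep it in video memory
    desc.dwFVF         = D3DFVF_XYZ | D3DFVF_DIFFUSE;
    desc.dwNumVertices = 3;

    IDirect3DVertexBuffer7* vb = NULL;
    lpD3D->CreateVertexBuffer(&desc, &vb, 0);

    void* data = NULL;
    vb->Lock(DDLOCK_WAIT | DDLOCK_WRITEONLY, &data, NULL);
    memcpy(data, tri, sizeof(tri));                        // the only window in which the data can change
    vb->Unlock();

    // at draw time the buffer is known to be static and already uploaded/optimized
    lpDevice->DrawPrimitiveVB(D3DPT_TRIANGLELIST, vb, 0, 3, 0);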

In OpenGL, you had display lists, which already provided enough heuristics for optimizing the data for GPU T&L. That meant even legacy applications could automatically benefit from GPU T&L.
OpenGL later got vertex buffer objects, similar to DX7's vertex buffers, which gave more explicit control for maximum efficiency.
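The OpenGL equivalents, just to illustrate the point (standard GL calls; the VBO path is the GL 1.5 style):

    // Display list: the driver sees the whole batch at compile time and can
    // keep an optimized copy for GPU T&L, even for old fixed-function code.
    GLuint list = glGenLists(1);
    glNewList(list, GL_COMPILE);
    glBegin(GL_TRIANGLES);
    glVertex3f(-1.0f, -1.0f, 0.0f);
    glVertex3f( 1.0f, -1.0f, 0.0f);
    glVertex3f( 0.0f,  1.0f, 0.0f);
    glEnd();
    glEndList();

    glCallList(list);                 // per frame: just replay the compiled copy

    // Later: vertex buffer objects, the more explicit DX7-style equivalent.
    GLfloat verts[] = { -1,-1,0,  1,-1,0,  0,1,0 };
    GLuint vbo;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);   // static usage hint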

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 5 of 10, by Sammy

User metadata
Rank Oldbie

I think Operation Flashpoint gives you an error if the graphics card does not support T&L.

Reply 6 of 10, by Jorpho

User metadata
Rank l33t++

Tron 2.0 also requires hardware T&L.

(I think I remember coming across a wrapper at one point that would lie to programs expecting certain features and still let you run the game – though of course it would look pretty crappy.)

Reply 7 of 10, by leileilol

User metadata
Rank l33t++
Sammy wrote:

I think Operation Flashpoint gives you an Error if the GFX-Card do not support T&L.

OpFlash has normal D3D and even Glide support (in addition to D3D HW T&L).

Jorpho wrote:

Tron 2.0 also requires hardware T&L.

HW T&L is required starting with Lithtech Jupiter (NOLF2).

long live PCem

Reply 8 of 10, by squareguy

User metadata
Rank Oldbie

Very interesting information, thanks all.

Gateway 2000 Case and 200-Watt PSU
Intel SE440BX-2 Motherboard
Intel Pentium III 450 CPU
Micron 384MB SDRAM (3x128)
Compaq Voodoo3 3500 TV Graphics Card
Turtle Beach Santa Cruz Sound Card
Western Digital 7200-RPM, 8MB-Cache, 160GB Hard Drive
Windows 98 SE

Reply 9 of 10, by 386SX

User metadata
Rank l33t

It would be nice to discuss whether the original hardware T&L "only" helped by freeing the 500 MHz to 1 GHz CPU generation for other computing tasks in games, even if that's hard to say. Because D3D software T&L probably became faster with newer CPUs?

Reply 10 of 10, by Scali

User metadata
Rank l33t
386SX wrote:

It would be nice to discuss whether the original hardware T&L "only" helped by freeing the 500 MHz to 1 GHz CPU generation for other computing tasks in games, even if that's hard to say. Because D3D software T&L probably became faster with newer CPUs?

Not really. The T&L pipeline is highly optimized, and was a huge leap forward.
Perhaps there are edge cases, taking the slowest T&L cards ever made and comparing them to the fastest CPUs... but in general hardware T&L was so much faster that it was no contest.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/