VOGONS


Reply 80 of 103, by Hoping

Rank: Oldbie

And here we are again; the first Unreal Engine was already a few years old in 2000.
Saying that a few games took advantage of Glide in the year 2000 is nothing to be happy about; it is a symptom of 3dfx's agony.
In 2000 came DX7 with hardware T&L, and in 2001 DX8 with programmable pixel and vertex shaders. The Voodoo cards never had those features, and the modern reproductions don't have them either; how could they?
I know that fans will seize on any detail to claim the Voodoo cards were better than the others, but the past cannot be changed, nor can history be rewritten. And that happens to many of us.
I wonder how Glide contributed to the development of D3D and OpenGL, which are APIs that still exist; maybe someone can provide information in that regard. Or maybe everything that OpenGL and D3D are had nothing to do with Glide, in which case Glide was only a blip rather than an improvement and contributed nothing to the future.
A more modern example is AMD's Mantle, which was also used by only some games. AMD donated the Mantle API to the Khronos Group, and from there Vulkan was developed; many games use Vulkan today.
We will see what the future holds for Vulkan.

Reply 81 of 103, by hornet1990

Rank: Newbie

Glide didn't influence D3D or GL. GL existed well before Glide. D3D (up to 11) and GL are higher-level, hardware-agnostic interfaces which do a lot of grunt work for the developer (managing textures and other buffers in video memory, etc.), whereas Glide, MeTaL, PowerSGL et al. were all tied to specific hardware and closer to the metal, which in theory meant faster performance, or at a minimum exposed functionality that couldn't be reached through D3D or GL at the time.

DX12 and Vulkan are somewhere in between.
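
To make "closer to the metal" concrete, here is a minimal Glide 2.x fragment; this is a sketch from memory of the 2.x headers rather than a checked, buildable sample, so treat the details as approximate. The telling part is what is absent: there is no transform or lighting stage at all, and the application hands the rasteriser vertices that are already in screen space.

    /* Hedged sketch of a Glide 2.x "hello triangle". */
    #include <glide.h>

    int main(void)
    {
        GrHwConfiguration hw;
        GrVertex a = { 0 }, b = { 0 }, c = { 0 };

        grGlideInit();
        grSstQueryHardware(&hw);
        grSstSelect(0);
        grSstWinOpen(0, GR_RESOLUTION_640x480, GR_REFRESH_60Hz,
                     GR_COLORFORMAT_ABGR, GR_ORIGIN_UPPER_LEFT, 2, 1);

        /* Vertices arrive in screen space: the game's own CPU code
         * has already done all transform and lighting. */
        a.x = 320.0f; a.y = 100.0f; a.oow = 1.0f;
        b.x = 500.0f; b.y = 380.0f; b.oow = 1.0f;
        c.x = 140.0f; c.y = 380.0f; c.oow = 1.0f;

        grBufferClear(0, 0, GR_WDEPTHVALUE_FARTHEST);
        grDrawTriangle(&a, &b, &c);   /* straight to the rasteriser */
        grBufferSwap(1);

        grGlideShutdown();
        return 0;
    }

Compare that with D3D or GL, where the runtime owns the geometry pipeline and resource management sits behind the API.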

Reply 82 of 103, by Hoping

Rank: Oldbie

So in the end Glide had no relevance for the future, because the APIs that lasted didn't take anything from Glide. I'm a little disappointed, considering how much people talk about the wonders of Glide. What made Glide different from the other proprietary APIs of the time was the combination of good features, good performance and more exclusive games.
But in the end it was nothing more than another proprietary API that remained in the past, because 3dfx wanted to monopolize the market even though there were bigger players: especially Microsoft, which promoted D3D very strongly as a more open API, and OpenGL, which was also more open and had a strong professional market. Meanwhile, only 3dfx cards could use Glide. I don't know if 3dfx would have allowed others to manufacture chips that could use Glide, but I imagine not, since after the purchase of STB their path was completely the opposite; they pursued a monopoly that, in light of what happened, existed only in their dreams.
I just realized I've never thought about it this way, and now I find it disappointing how 3dfx couldn't see where things were going when Nvidia released the TNT.

Reply 83 of 103, by DrAnthony

Rank: Newbie

I don't think it's entirely fair to say that Glide was just an attempt at gaining/maintaining a monopoly. At the release of the original Voodoo (and to a somewhat lesser degree the Voodoo II), the vendor-agnostic APIs were a hot mess. Glide was a revelation in comparison and really served a functional purpose. I don't think we would have seen D3D improve anywhere near as fast if it weren't for Glide, though it's possible the industry might have rallied around OpenGL instead; but OpenGL was fairly esoteric and wasn't designed with gaming in mind. Heck, maybe a Vulkan-like split might have occurred in the 90s.

Reply 84 of 103, by hornet1990

Rank: Newbie
DrAnthony wrote on 2024-04-23, 23:25:

I don't think it's entirely fair to say that Glide was just an attempt at gaining/maintaining a monopoly. At the release of the original Voodoo (and to a somewhat lesser degree the Voodoo II), the vendor-agnostic APIs were a hot mess. Glide was a revelation in comparison and really served a functional purpose. I don't think we would have seen D3D improve anywhere near as fast if it weren't for Glide, though it's possible the industry might have rallied around OpenGL instead; but OpenGL was fairly esoteric and wasn't designed with gaming in mind. Heck, maybe a Vulkan-like split might have occurred in the 90s.

I don't think D3D's rate of improvement was related to Glide; it was a consequence of the market at the time. You have to remember D3D is controlled by a single company whose stated goal at the time was to get Windows on every desktop in the world, and gaming was one avenue towards that, particularly in the home. As a primarily software company they had a vested interest in listening to developers telling them what the pain points were and trying to address them in the next version, while simultaneously having umpteen different vendors approaching them saying their next hardware was going to have x/y/z new functionality that they wanted exposed in D3Dnext. Hence we saw fairly rapid development of D3D.

This ultimately led to the mess that was pixel shaders, where you had a version per vendor's hardware and none of them were quite compatible with each other. This was a pain point for developers, so for D3D9 and Shader Model 2 Microsoft took the reins and decreed "thou shalt follow our specification". By that time, however, many of the hardware vendors had either ceased to exist (3dfx, S3) or dropped out of the desktop marketplace (PowerVR), and MS had much-improved marketplace clout, so they could dictate terms. Progress then naturally slowed and became more evolutionary than revolutionary as fewer vendors consolidated around fairly standardised hardware functionality.

In contrast, OpenGL was created by a consortium of hardware vendors whose interest was in following where the money in 3D was, which in the early 90s, when GL was created, meant visualisation, simulation and CAD. GL 1.x contexts had a lot of functionality that was useful to those industries but not to gaming, yet it all had to be implemented in the ICD even if only as slow software emulation. Those industries were also intolerant of crashes, so GL did a lot more validation of input compared to D3D, which naturally led to slower performance (I was shocked at how much code and how many layers you went through before you even got close to rasterization in the reference ICD implementation).

As with anything driven by committee, they were slow to adapt to gaming, or in fact to any change. Getting extensions to the spec approved was painfully slow, and getting vendor-specific extensions approved was nigh on impossible unless they were already a de facto standard in hardware (S3TC texture compression, for example). Managing all the different extensions was a nightmare for developers, but as hardware vendors they didn't care about that as much. By the time they finally tackled the problem and made GL more viable for games development, we were already approaching, or in, the post-D3D9 world.

As I mentioned before, drivers over the years have done an increasing amount of work for developers (resource management etc.), which for most use cases is generally acceptable, but Mantle and ultimately Vulkan/D3D12 were born out of requests to give developers fine-grained control in order to unlock that bit of extra performance by tailoring to their specific use case. Unfortunately, as we've seen, not all devs have the knowledge or resources to exploit that. But could such a thing have occurred in the 90s? I'd argue probably not: we already had a profusion of similar-concept APIs with Glide, PowerSGL, MeTaL etc. for Windows, which all had largely similar functionality albeit differing in the details, but developer knowledge and resources were the same constraint then as they are now, even if those APIs had consolidated into one. Look at any of the lists on this forum of titles that used any of those APIs and they're all relatively small (even Glide's) compared to the sheer number of titles that primarily used D3D or GL.

Reply 85 of 103, by spiroyster

Rank: Oldbie
Hoping wrote on 2024-04-23, 22:33:

So in the end Glide had no relevance for the future, because the APIs that lasted didn't take anything from Glide. I'm a little disappointed, considering how much people talk about the wonders of Glide. What made Glide different from the other proprietary APIs of the time was the combination of good features, good performance and more exclusive games.
But in the end it was nothing more than another proprietary API that remained in the past, because 3dfx wanted to monopolize the market even though there were bigger players: especially Microsoft, which promoted D3D very strongly as a more open API, and OpenGL, which was also more open and had a strong professional market. Meanwhile, only 3dfx cards could use Glide. I don't know if 3dfx would have allowed others to manufacture chips that could use Glide, but I imagine not, since after the purchase of STB their path was completely the opposite; they pursued a monopoly that, in light of what happened, existed only in their dreams.
I just realized I've never thought about it this way, and now I find it disappointing how 3dfx couldn't see where things were going when Nvidia released the TNT.

I think you are approaching this from the wrong perspective.

DirectX/Direct3D was a platform-dependent, hardware-independent attempt to further MS's interests in an all-encompassing development environment for Windows and Windows only. Games, as part of the ever-growing multimedia experience, were a big selling point and a pivotal component of Windows then.

OpenGL was/is a platform-independent, hardware-independent specification that vendors could create hardware for and developers write against. It was governed by a collective (the OpenGL ARB), of which Microsoft was actually a member in the early days. Its remit was to provide an industry-directed specification for 3rd-party vendors to implement, one which had to cater to all disciplines and industries, not just 3D 'multimedia'/games.

Glide was a hardware-dependent, platform-independent API for 3dfx hardware only. 3dfx came up with the hardware, and in order for 3rd parties to use it there needed to be an API. Originally 3dfx did license Glide hardware for the original Voodoos (hence all the different Voodoo 1/2 manufacturers), but then for whatever reason decided to take all hardware manufacturing 'in-house' via the STB acquisition. So all along, Glide was designed to be tightly coupled to 3dfx hardware, not to be some all-encompassing industry-standard API... it just became the 'industry standard' briefly due to the popularity of the Voodoo.

Also, it's probably relevant to remember how and why 3dfx came about... it was founded by ex-SGI engineers (who already had experience and knowledge of IrisGL and OpenGL, so they knew exactly what "TnL" was). They spotted a gap in the market for cut-down hardware focusing on texturing and rasterisation, which could be optimised for low-poly, high-fillrate, fast, textured graphics (aka games). The hardware processing of SGI's Geometry Engine (which traditionally performed 'hardware' transform) was something to be rid of in the pipeline, because it is expensive silicon which, while helpful for pushing the large vertex sets and meshes required in CAD/CAM/simulation etc., was overkill for low-poly scenarios. What gamers wanted was "high res" (relatively speaking), good-looking, responsive polys on the screen, so things like texture filtering, multi-texturing and fog effects were suddenly a lot more important. This started pushing APIs and hardware in a slightly different direction from the all-encompassing, complex, expensive hardware needed to satisfy the OpenGL spec.

Later, some developers wanted to break out of this Glide/DirectX vendor lock-in and so decided to use OpenGL; however, for the reasons above, most of the features of GL were not required, so "MiniGLs" could implement just the limited set of OpenGL calls and features that a given game used. This allowed hardware vendors to support OpenGL games without having to provide a full, complex OpenGL-spec ICD (hardware and software). A hypothetical shim in that spirit is sketched below.
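
Purely as an illustration of the concept (a made-up fragment with invented mgl* names, not 3dfx's actual opengl32.dll source): the essence of a MiniGL was catching a small slice of GL 1.1 immediate mode and re-issuing it as triangles to the vendor's own back end, e.g. a Glide draw call.

    /* Hypothetical MiniGL-style shim: batch immediate-mode vertices
     * into triangles and hand them to a hardware back end. */
    #include <stdio.h>

    typedef struct { float x, y, z; } Vtx;

    /* Stand-in for the hardware path (e.g. Glide's grDrawTriangle). */
    static void submit_triangle(const Vtx t[3])
    {
        printf("tri (%.0f,%.0f) (%.0f,%.0f) (%.0f,%.0f)\n",
               t[0].x, t[0].y, t[1].x, t[1].y, t[2].x, t[2].y);
    }

    static Vtx batch[3];
    static int batch_n;

    void mglBegin(void) { batch_n = 0; }
    void mglEnd(void)   { batch_n = 0; }

    void mglVertex3f(float x, float y, float z)
    {
        batch[batch_n] = (Vtx){ x, y, z };
        if (++batch_n == 3) {         /* GL_TRIANGLES-style grouping */
            submit_triangle(batch);   /* hand off to the rasteriser  */
            batch_n = 0;
        }
    }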

I will also add that all this hardware TnL pantomime is a by-product of the PC/DirectX world. As mentioned, the "Geometry Engine" was a dedicated hardware subsystem which performed MVP (model-view-projection) operations on 'grouped' vertex sets (display lists) from almost day 1 on SGI machines. Arcade hardware such as Sega's Model 1 and Namco's System 21 also did this in hardware via their custom ASICs (but had very specific requirements and closed development environments).
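
For anyone wondering what that transform work actually is: it boils down to a 4x4 matrix multiply per vertex plus a perspective divide, the "T" in TnL that either the CPU or dedicated silicon has to grind through. A hypothetical C sketch, using column-major matrices as per GL convention:

    /* Sketch of the per-vertex "T" in TnL: one vertex through a
     * combined model-view-projection matrix, then the perspective
     * divide. Column-major 4x4 layout, as in OpenGL convention. */
    typedef struct { float x, y, z, w; } Vec4;

    Vec4 transform_vertex(const float mvp[16], Vec4 v)
    {
        Vec4 r;
        r.x = mvp[0]*v.x + mvp[4]*v.y + mvp[8]*v.z  + mvp[12]*v.w;
        r.y = mvp[1]*v.x + mvp[5]*v.y + mvp[9]*v.z  + mvp[13]*v.w;
        r.z = mvp[2]*v.x + mvp[6]*v.y + mvp[10]*v.z + mvp[14]*v.w;
        r.w = mvp[3]*v.x + mvp[7]*v.y + mvp[11]*v.z + mvp[15]*v.w;

        /* Perspective divide to normalised device coordinates; a
         * real pipeline also clips, lights and viewport-maps here. */
        r.x /= r.w;  r.y /= r.w;  r.z /= r.w;
        return r;
    }

Run that (plus lighting) over every vertex of every frame and it is clear why CAD-class hardware wanted it in silicon, and why 3dfx judged it skippable for low-poly games.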

OpenGL, being a specification, doesn't mandate what should be hardware-accelerated or not; the implementation is left for hardware vendors to provide via their ICD. Also, being an open, industry-led specification, it attempts to satisfy requirements from many different disciplines, so not only was it very difficult to provide a full-spec OpenGL implementation, but doing so carried a lot of baggage, and the API design became a limiting factor in performance for games.

As with everything, though, processes mature and requirements evolve... this also happened when games suddenly wanted more detailed meshes, higher vertex counts and better control on a per-pixel basis (shaders), which meant that hardware TnL and programmable stages suddenly became not only cheaper to implement in silicon but actually mandatory. This was kind of the opposite of 3dfx's initial strategy, and if anything it meant that consumer gaming hardware design was now converging on the original principles that 3dfx had circumvented in order to grasp its original market.

Reply 86 of 103, by DrAnthony

Rank: Newbie

Did some more digging and found a repost of an article I remembered from 2000.

https://www.beyond3d.com/content/articles/50/

Pretty well written; it explains the limitations of the testing, but it further validates the point I mentioned earlier, that hardware T&L didn't outperform widely available CPUs of the time and many of us were aware of it. Also, it's not that long a read, so don't dismiss it after the first page.

Reply 87 of 103, by eddman

Rank: Member

That's a synthetic test, and with a GeForce 256 only. The article itself mentions that performance in games would be different, because the CPU has to do much more than just compute T&L.

Offloading those computations to the GPU should free up the CPU for other work and probably increase overall performance.

There would need to be a test of multiple games, with other HW T&L cards besides the 256, to draw a clear picture.

Reply 88 of 103, by rasz_pl

Rank: l33t

Tests from 1999-2000 show a minimal percentage difference when going from a high-end to a low-end CPU in real games on the GF256. That first T&L implementation wasn't helping hugely, even on low-end Celerons.
Compare the GF256 SDR and Savage 2000 in Q3 with a Celeron 366 vs a P3 700: https://www.anandtech.com/show/429/5 https://www.anandtech.com/show/429/10 The difference is +5% on the Celeron and -5% on the P3, so the Savage speeds up by, hmm, ~10% with more CPU available versus hardware T&L? Not good, not terrible, considering the GF256's MSRP was the same as the earlier TNT2 Ultra's.

Reply 89 of 103, by Dothan Burger

Rank: Member
rasz_pl wrote on 2024-05-01, 08:38:

Tests from 1999-2000 show a minimal percentage difference when going from a high-end to a low-end CPU in real games on the GF256. That first T&L implementation wasn't helping hugely, even on low-end Celerons.
Compare the GF256 SDR and Savage 2000 in Q3 with a Celeron 366 vs a P3 700: https://www.anandtech.com/show/429/5 https://www.anandtech.com/show/429/10 The difference is +5% on the Celeron and -5% on the P3, so the Savage speeds up by, hmm, ~10% with more CPU available versus hardware T&L? Not good, not terrible, considering the GF256's MSRP was the same as the earlier TNT2 Ultra's.

It says on the Test page that they just ran the Quake III Arena demo001.dm3.

Reply 90 of 103, by Bruno128

Rank: Member
spiroyster wrote on 2024-04-24, 09:43:

decided to take all hardware manufacturing 'in-house' via the STB acquisition

In mid-2000, when 3dfx was already a sinking ship, they rolled back on this (out of desperation, probably) and let PowerColor make retail VSA-100 cards with non-reference designs.

Reply 91 of 103, by leileilol

Rank: l33t++

Is there a 'hwtnl is a lie' narrative going on here? Because please do tell that to the V5/Intel/TNT2/Savage/Kyro gamers in late 2002 who couldn't play BF1942.

Also bear in mind that all of Quake 3's transforms and lighting are entirely software-calculated (regardless of hardware; this is by design), and most enthusiast sites never realized that back then (nor knew about r_primitives).

Reply 92 of 103, by rasz_pl

Rank: l33t
leileilol wrote on 2024-05-01, 21:22:

Also bear in mind that all of Quake 3's transforms and lighting are entirely software-calculated (regardless of hardware; this is by design)

Re: Slow 3D video card on super socket 7 shootout (sis 305, savage4, trident blade3d, riva tnt, etc)

even GLQuake uses glTranslatef(); glRotatef(); glScalef(); and afaik those are automagically TnL powered when possible.

Reply 93 of 103, by leileilol

Rank: l33t++
rasz_pl wrote on 2024-05-02, 02:17:

even GLQuake uses glTranslatef(); glRotatef(); glScalef(); and afaik those are automagically TnL powered when possible.

Of those functions, only glTranslate was used for drawing the sky.

Reply 94 of 103, by DrAnthony

Rank: Newbie
leileilol wrote on 2024-05-01, 21:22:

Is there a 'hwtnl is a lie' narrative going on here? Because please do tell that to the V5/Intel/TNT2/Savage/Kyro gamers in late 2002 who couldn't play BF1942.

I can't speak for everyone, but my point is that fixed-function T&L was more valuable as a marketing tool than as a hardware feature. That said, it ABSOLUTELY wasn't an evolutionary dead end like quadratic primitives, forward texture mapping, and other quirky mid-90s features. Also, just because the GeForce 256 and GeForce 2 line incorporated this feature doesn't mean they were bad or poorly designed cards. They were absolute monsters and stood on the merit of their real-world performance, marketing gimmicks aside.

Reply 95 of 103, by hornet1990

Rank: Newbie
rasz_pl wrote on 2024-05-02, 02:17:
leileilol wrote on 2024-05-01, 21:22:

Also bear in mind that all of Quake 3's transforms and lighting are entirely software-calculated (regardless of hardware; this is by design)

Re: Slow 3D video card on super socket 7 shootout (sis 305, savage4, trident blade3d, riva tnt, etc)

even GLQuake uses glTranslatef(); glRotatef(); glScalef(); and afaik those are automagically TnL powered when possible.

Nooo... those are matrix operations that manipulate the currently selected matrix (ModelView, Projection or Texture) in the GL context state. The closest you'll get to hardware acceleration there is a SIMD-optimised matrix multiply using your CPU's SSE or 3DNow! instructions. Each call builds a matrix of the specified kind from the arguments provided, then multiplies the current matrix by it, replacing the original value. All very much CPU-side.

It's not until a glDraw* command is issued that the current values of those matrices are either used by the software TnL or sent to the GPU for the hardware TnL engine to transform the vertices.
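
To make that concrete, here is a minimal fixed-function GL 1.x fragment (a sketch only; it assumes a context is already current and uses standard GL 1.1 entry points):

    #include <GL/gl.h>

    /* The matrix calls below only edit CPU-side context state; no
     * TnL engine is involved until the draw call at the end. */
    void draw_object(const GLfloat *verts, GLsizei vertex_count,
                     GLfloat angle)
    {
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        glTranslatef(0.0f, 0.0f, -5.0f);     /* CPU-side matrix math */
        glRotatef(angle, 0.0f, 1.0f, 0.0f);  /* likewise, state only */

        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(3, GL_FLOAT, 0, verts);

        /* Only here do the vertices meet the current matrices, in
         * the driver's software TnL or on a hardware TnL engine. */
        glDrawArrays(GL_TRIANGLES, 0, vertex_count);

        glDisableClientState(GL_VERTEX_ARRAY);
    }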

Reply 96 of 103, by leileilol

Rank: l33t++
DrAnthony wrote on 2024-05-02, 19:22:

I can't speak for everyone, but my point is that fixed-function T&L was more valuable as a marketing tool than as a hardware feature.

You could say that about any big graphics feature that arrives years ahead of the games that use it. Things only got more gimmicky after DirectX 10, because how are you going to outdo shaders?

Reply 97 of 103, by swaaye

Rank: l33t++

Nvidia outmaneuvered everyone else, though I preferred Matrox and the ATI Radeon for image quality. Nvidia was making smart moves, balancing acceptable quality against top performance for that halo-performance mindshare, along with a rapid rate of product iteration to attract more attention.

Wasn't Rampage originally a DirectX 7 product? 3dfx had big project-management issues. They also wasted some manpower on the Sega Dreamcast.

Reply 98 of 103, by DrAnthony

Rank: Newbie
leileilol wrote on 2024-05-03, 01:10:
DrAnthony wrote on 2024-05-02, 19:22:

I can't speak for everyone, but my point is that fixed-function T&L was more valuable as a marketing tool than as a hardware feature.

You could say that about any big graphics feature that arrives years ahead of the games that use it. Things only got more gimmicky after DirectX 10, because how are you going to outdo shaders?

Please don't take this as being abrasive, but would you mind citing a few examples? The closest I can come up with would be, oddly enough, the big antialiasing push 3dfx made with the Voodoo 5 series. The cards just weren't fast enough to make it super worthwhile, but it eventually led to the industry accepting AA as a standard feature in some form or another, just as all that compute in a T&L engine made more sense once it was running shader programs. Other than that, the other big gimmicks have either been dead ends (3D displays and the like) or are still up in the air (ray tracing).

Reply 99 of 103, by leileilol

Rank: l33t++

Supersampling the screen was something shared with the other vendors, so it's not a 3dfx-only idea (FSAA hype was off the charts before pixel shaders hit). 3dfx's gimmick that didn't go through was FXT1 texture compression though 😀

FSAA/SSAA itself was a bit gimmicky only because it was a slow, brute-force approach that didn't play well with pixel-buffer writes (as in Direct3D games) or text, so it was generally superseded by MSAA.

The 3dfx Voodoo also had edge AA, but only the splash animations used it, plus one specific Tomb Raider version. Considering the drawbacks of edge AA, that's a gimmick right there!

In the same year as pixel shading there had been geometry tessellation, which didn't gain steam beyond the cards of that year.
