Modern video drivers and APIs


First post, by appiah4

Rank: l33t++
Scali wrote:
386SX wrote:

Maybe these companies didn't expect something like the GeForce chip to be released, but from that point both 3dfx and the others should have immediately followed the newer DirectX specifications just like NV did, perhaps remembering the old NV1 chip vs DirectX situation.
Why stay on a DX6 design when others had already released refreshed DirectX 7 chips with more features (for example the NSR on the GeForce2)?

Probably a question of cause and effect.
NV probably pioneered hardware T&L, and proposed to have it supported by the DX7 standard, while they may already have had prototypes working in the lab.
Other manufacturers focused on other things, and had to start from scratch on T&L, so they would be behind the curve here.
You see the same with virtually every version of DX... one company seems to get it right out of the gate, the other is struggling to keep up.
For example:
DX7: GF256
DX8: GF3
DX9: Radeon 9700
DX10: GeForce 8800 (going beyond that even, also pioneering GPGPU for OpenCL and DirectCompute).

DX11 and DX12 aren't as clear-cut. With DX11, the Radeon 5xxx was first, but NV ran into a lot of trouble with their GF4xx. Once they sorted it out in the 5xx series, they had excellent DX11 cards as well.
DX12 is more of an API update than a feature update, so cards with DX12 support were already on the market when DX12 launched. Ironically enough, the Intel GPUs are the most feature-complete DX12 GPUs on the market.

Well, to be fair, the ATI 8500 was the superior card to the GF3 and basically trumped it, so to say nVidia 'got the DX8 era right' is wrong.

Also, with DX12, although feature completeness is not the issue, performance gains and better leveraging of the API are AMD's forte here.

Same goes for Vulkan.

Retronautics: A digital gallery of my retro computers, hardware and projects.

Reply 1 of 24, by Scali

Rank: l33t
appiah4 wrote:

Well, to be fair, the ATI 8500 was the superior card to the GF3 and basically trumped it, so to say nVidia 'got the DX8 era right' is wrong.

The Radeon 8500 came out months later than the GF3, and only a few months before the GF4.
If anything, GF3 simply trumped the 8500 by being available as the first DX8 card.
And the GF4 redefined the DX8 performance benchmark, making the 8500 obsolete very quickly.
To me that makes it pretty obvious that NV got the DX8 era right.
ATi was too late to the party, and never had an answer to the GF4. They also suffered from driver issues, and were caught cheating (quack.exe), so that didn't really help the image of the 8500 either.

appiah4 wrote:

Also, with DX12, although feature completeness is not the issue, performance gains and better leveraging of the API are AMD's forte here.

Same goes for Vulkan.

If you're looking at it from the wrong angle, yes.
If you're looking at it from the right angle, you'll see that AMD had trouble making efficient implementations for legacy APIs like OpenGL and DX11. DX12 and Vulkan put a lot of the responsibility on the application rather than the driver, so they filter out the bottlenecks that were normally in the driver (for example, AMD never got deferred contexts in DX11 working; there was no performance benefit whatsoever, unlike on NV).
These benchmarks make that quite obvious:
http://www.pcgamer.com/doom-benchmarks-return … an-vs-opengl/2/
It's not so much that AMD is so super-fast in Vulkan, it's more that their OpenGL performance is so much lower than NV's.
Their fastest card only does 119 fps in 1920x1080 in OpenGL. NV gets 162 fps out of their fastest card with OpenGL (which ironically enough is faster than AMD's best Vulkan score).
Even a 980 does more than 119 fps in OpenGL. Clearly that's a driver issue on AMD's side (normally you'd expect the Fury X closer to the 980Ti).

Even so, you see that AMD GPUs are still considerably less efficient in DX12 and Vulkan than NV GPUs are. They need larger GPUs with more transistors, higher bandwidth memory, using more power, and can barely reach GF1080 performance that way.
In the case of DOOM, you're even comparing apples-to-oranges, since AMD uses special shader extensions, where NV runs a vanilla Vulkan shader path.

So while it might not be easy to tell whether a specific GPU was the main inspiration for DX12 and/or Vulkan, it's quite clear that NV's current GPUs are way ahead of the competition.
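
To make the deferred-context point above concrete, this is roughly what the DX11 mechanism looks like from the application side (a minimal sketch, not code from any of the benchmarks mentioned; device and swap-chain creation are assumed to exist, error handling omitted):

#include <d3d11.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Record draw calls on a deferred context (typically on a worker thread),
// then submit the baked command list on the immediate context.
void RecordAndSubmit(ID3D11Device* device, ID3D11DeviceContext* immediateContext)
{
    ComPtr<ID3D11DeviceContext> deferredContext;
    device->CreateDeferredContext(0, &deferredContext);

    // ... record state changes and Draw() calls on deferredContext here ...

    ComPtr<ID3D11CommandList> commandList;
    deferredContext->FinishCommandList(FALSE, &commandList);

    // Submission happens on the immediate context (main thread). Whether this
    // actually scales across cores depends on the driver, which is the point
    // being made above about AMD's DX11 implementation.
    immediateContext->ExecuteCommandList(commandList.Get(), FALSE);
}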

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 2 of 24, by gdjacobs

Rank: l33t++

Console developers have long used lower level graphics APIs (except XBox, of course) to extract maximum performance. Easier development was also one of the upsides in developing Mantle (which prompted D3D12 and Vulkan). It wasn't all about implementation deficiencies in Radeon software.

All hail the Great Capacitor Brand Finder

Reply 3 of 24, by Scali

Rank: l33t
gdjacobs wrote:

(except XBox, of course)

Not "Except Xbox", Xbox too had low-level APIs.
https://blogs.msdn.microsoft.com/directx/2014 … /20/directx-12/

Under the hood, Forza achieves this by using the efficient low-level APIs already available on Xbox One today.  Traditionally this level of efficiency was only available on console – now, Direct3D 12, even in an alpha state, brings this efficiency to PC and Phone as well.  By porting their Xbox One Direct3D 11.X core rendering engine to use Direct3D 12 on PC, Turn 10 was able to bring that console-level efficiency to their PC tech demo.

DX 11.'X' is their special Xbox One version with low-level extensions. Earlier Xbox models had similar APIs, differing from the PC version of DX.

MS also specifically pointed out not to expect a performance boost from DX12 on Xbox One.
So basically Xbox always shipped with a Mantle-like API, as did the PS4. Which makes one wonder why AMD tried to make all this fuss about Mantle in the first place. They were basically just copying what they saw MS and Sony doing, and tried to spin it like Mantle was the actual API used on consoles.

gdjacobs wrote:

Easier development was also one of the upsides in developing Mantle

These new APIs are actually considerably more difficult to use, and require a lot more code and administration.
What's worse, such low-level optimizations may no longer work for all GPUs.
I wouldn't be surprised if, a few years from now with newer GPU generations, some early games with both DX11/DX12 or OpenGL/Vulkan backends run faster in the 'legacy' API mode, because there the driver uses a GPU-optimized path, which may be more efficient than the generic solutions built into the game engine for the new APIs.
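
To give a feel for that extra administration: a rough sketch of the bookkeeping a DX12 application has to do itself, which a DX11 driver used to hide (illustrative only; device and command-queue creation omitted, error handling stripped):

#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// The application now owns command allocators, command lists and the
// CPU/GPU synchronization that used to be the driver's problem.
void SubmitFrame(ID3D12Device* device, ID3D12CommandQueue* queue)
{
    ComPtr<ID3D12CommandAllocator> allocator;
    device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT, IID_PPV_ARGS(&allocator));

    ComPtr<ID3D12GraphicsCommandList> cmdList;
    device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                              allocator.Get(), nullptr, IID_PPV_ARGS(&cmdList));

    // ... record resource barriers and draw calls here ...
    cmdList->Close();

    ID3D12CommandList* lists[] = { cmdList.Get() };
    queue->ExecuteCommandLists(1, lists);

    // Explicit fence-based synchronization: the app must make sure the GPU
    // is done before reusing the allocator or the resources it references.
    ComPtr<ID3D12Fence> fence;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));
    queue->Signal(fence.Get(), 1);

    HANDLE done = CreateEvent(nullptr, FALSE, FALSE, nullptr);
    fence->SetEventOnCompletion(1, done);
    WaitForSingleObject(done, INFINITE);
    CloseHandle(done);
}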

gdjacobs wrote:

Mantle (which prompted D3D12 and Vulkan).

DX12 was already in development before Mantle.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 4 of 24, by appiah4

Rank: l33t++

We had this discussion before, Scali; I don't intend to go down that rabbit hole again. Suffice to say, nV's approach to writing DX11 drivers (i.e. replacing game shaders with their own) is not an example of good driver optimization; it's just an nVidia tactic of pushing developers toward lazy and poor coding and then leveraging their own assets to make that lazy coding a barrier to competitors.

Retronautics: A digital gallery of my retro computers, hardware and projects.

Reply 5 of 24, by Scali

Rank: l33t
appiah4 wrote:

Suffice to say, nV's approach to writing DX11 drivers (i.e. replacing game shaders with their own) is not an example of good driver optimization; it's just an nVidia tactic of pushing developers toward lazy and poor coding and then leveraging their own assets to make that lazy coding a barrier to competitors.

Oh really... As if my example of implementing deferred contexts in DX11 efficiently has anything to do with shader replacement. Clearly there's more to NV's drivers than just shader replacement (as if AMD doesn't do exactly the same).
Heck, I write D3D applications myself. I can be 100% sure that NV and AMD don't have any kind of driver hacks for my specific software, because they've never even seen it yet.
And in this 'virgin' scenario, you still see the exact same things: NV drivers just do some things better than AMD drivers.
This may also be a result of GPU design in part (you can't really tell whether the extra CPU cycles are spent on more work having to be done to get the GPU to perform a certain operation, or that they are 'idle' CPU cycles, waiting for the GPU to finish).

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 6 of 24, by spiroyster

Rank: Oldbie
appiah4 wrote:

Suffice to say, nV's approach to writing DX11 drivers (i.e. replacing game shaders with their own) is not an example of good driver optimization; it's just an nVidia tactic

You do realise this has been going on for the last 15+ years? It's called a 'vendor specific extension' o.0

appiah4 wrote:

pushing developers toward lazy and poor coding and then leveraging their own assets to make that lazy coding a barrier to competitors.

WTF does this even mean?

Four letters... AZDO! These slides may help you understand what's actually going on here:
https://www.khronos.org/assets/uploads/develo … y-GDC-Mar14.pdf

Note the repeated use of the phrase

khronos wrote:

On driver limited cases, obviously
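
For anyone who doesn't feel like reading the slides: the core AZDO idea is to take per-draw work away from the driver, for example by replacing glBufferSubData-style updates with a persistently mapped buffer and doing the synchronization in the application. A minimal sketch of that one technique, assuming GL 4.4 / ARB_buffer_storage and an extension loader such as GLAD:

#include <glad/glad.h> // any loader exposing GL 4.4 / ARB_buffer_storage will do

GLuint vbo = 0;
void* mappedPtr = nullptr;

// Create an immutable buffer that stays mapped for its whole lifetime.
// The app writes vertex data into mappedPtr directly every frame and uses
// glFenceSync/glClientWaitSync to avoid stomping on data the GPU is still
// reading, instead of relying on the driver to manage that for it.
void CreatePersistentVertexBuffer(GLsizeiptr size)
{
    const GLbitfield flags = GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferStorage(GL_ARRAY_BUFFER, size, nullptr, flags);
    mappedPtr = glMapBufferRange(GL_ARRAY_BUFFER, 0, size, flags);
}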

Reply 7 of 24, by gdjacobs

Rank: l33t++

Regarding DirectX on the XBox, right you are.

Scali wrote:
gdjacobs wrote:

Easier development was also one of the upsides in developing Mantle

These new APIs are actually considerably more difficult to use, and require a lot more code and administration.
What's worse, such low-level optimizations may no longer work for all GPUs.
I wouldn't be surprised if, a few years from now with newer GPU generations, some early games with both DX11/DX12 or OpenGL/Vulkan backends run faster in the 'legacy' API mode, because there the driver uses a GPU-optimized path, which may be more efficient than the generic solutions built into the game engine for the new APIs.

Agreed, they offer more flexibility but more pitfalls as well. It does make porting console titles back to PC substantially easier along with the move away from more exotic console hardware. Much more commonality in the graphics subsystem, although significant resources would still have to go into code path optimization.

Don't get me wrong. OpenGL 3.x and 4.x performance was comparatively weak with the proprietary Radeon drivers, so Vulkan was certainly a boost for AMD. Interestingly, the hardware appears to be quite a bit more capable using a different driver codebase.
http://www.phoronix.com/scan.php?page=article … u-1730-radeonsi

Scali wrote:
gdjacobs wrote:

Mantle (which prompted D3D12 and Vulkan).

DX12 was already in development before Mantle.

Indeed, although I have a feeling AMD was trying to steer the wagon a bit with Mantle.

All hail the Great Capacitor Brand Finder

Reply 8 of 24, by Scali

Rank: l33t
gdjacobs wrote:

It does make porting console titles back to PC substantially easier along with the move away from more exotic console hardware. Much more commonality in the graphics subsystem, although significant resources would still have to go into code path optimization.

In the case of DX12 yes, that's basically the point.
The PS4 still has its entirely unique API, so you'd have to do a complete rewrite of all the graphics.

gdjacobs wrote:

Indeed, although I have a feeling AMD was trying to steer the wagon a bit with Mantle.

It seems more like they wanted to make a proprietary 'DX12-lite' as a marketing vehicle, knowing that DX12 would not be released until Windows 10, which would be very late to try and monetize their position as APU supplier for consoles.
It was basically 100% marketing, as I said all along. Mantle never got out of closed beta, so only a handful of developers actually ever used Mantle. AMD also never added support for all the GPUs they claimed. Only a select few GPUs have support.
And although AMD made huge claims about games supporting Mantle, only a handful ever did.
As soon as DX12 and Vulkan surfaced, AMD dropped Mantle like a bad habit.

Now look at CUDA and how NV handled that. They didn't just use it as a marketing tool for GPGPU until vendor-independent standards like OpenCL and DirectCompute surfaced.
They're still supporting it today as the API to get full access to the latest GPGPU features, and it is actually a far more mature programming environment than OpenCL and DirectCompute are.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 9 of 24, by spiroyster

Rank: Oldbie
Scali wrote:

As soon as DX12 and Vulkan surfaced, AMD dropped Mantle like a bad habit.

Before! Vulkan IS Mantle 2.0. 'Vulkan' as in Volcano (associated with Mantle). OpenGL was getting really dated, and nVidia were the ones making good OpenGL implementations. They pushed it; what they were doing under the hood was probably what Vulkan became. They had been going that way for a while with GPGPU. ATi have always been quite good with the ideas, but they don't seem to always materialise (they pushed for a shader model in the ARB originally, which is probably where MS got the idea since they were on the ARB too, then left and locked Direct3D to Windows, only supporting GL 1.2 from that point on). So when AMD realised they didn't stand a chance, they donated it to Khronos, and here we are now with Vulkan.... which nVidia are still dominating with. o.0

Reply 10 of 24, by Scali

Rank: l33t
spiroyster wrote:

(they pushed for a shader model in the ARB originally, which is probably where MS got the idea since they were on the ARB too, then left and locked Direct3D to Windows, only supporting GL 1.2 from that point on).

I'm pretty sure MS was first with that.
They worked with NV to develop HLSL and Cg. Probably an extension of the work that they did to get DX8 + GF3 developed together.
ATi didn't have anything remotely like HLSL/Cg in their shader extensions for OpenGL for the Radeon 8500. They just had a very primitive thing, more like fixed-function D3D texture stages:
https://www.khronos.org/registry/OpenGL/exten … ment_shader.txt
You would build up a shader with successive calls.
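
To illustrate what 'building up a shader with successive calls' means, here is a small sketch written from the extension spec linked above (untested on real R200 hardware, and the entry points have to be fetched through the usual GL extension mechanism); it is roughly the equivalent of a single modulate stage:

// GL_ATI_fragment_shader: the 'shader' is assembled call by call,
// much like configuring fixed-function texture stages.
GLuint shader = glGenFragmentShadersATI(1);
glBindFragmentShaderATI(shader);
glBeginFragmentShaderATI();

// Sample texture unit 0 into temporary register 0, using the interpolated
// texture coordinates (str swizzle).
glSampleMapATI(GL_REG_0_ATI, GL_TEXTURE0_ARB, GL_SWIZZLE_STR_ATI);

// reg0 = reg0 * primary color (i.e. a GL_MODULATE-style combine).
glColorFragmentOp2ATI(GL_MUL_ATI,
                      GL_REG_0_ATI, GL_NONE, GL_NONE,          // destination
                      GL_REG_0_ATI, GL_NONE, GL_NONE,          // argument 1
                      GL_PRIMARY_COLOR_ARB, GL_NONE, GL_NONE); // argument 2

glEndFragmentShaderATI();
glEnable(GL_FRAGMENT_SHADER_ATI);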

For some reason, Cg didn't get adopted by OpenGL, and they re-invented the wheel (poorly) with GLSL.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 11 of 24, by spiroyster

Rank: Oldbie
Scali wrote:
spiroyster wrote:

(they pushed for a shader model in the ARB originally, which is probably where MS got the idea since they were on the ARB too, then left and locked Direct3D to Windows, only supporting GL 1.2 from that point on).

I'm pretty sure MS was first with that.
They worked with NV to develop HLSL and Cg. Probably an extension of the work that they did to get DX8 + GF3 developed together.
ATi didn't have anything remotely like HLSL/Cg in their shader extensions for OpenGL for the Radeon 8500. They just had a very primitive thing, more like fixed-function D3D texture stages:
https://www.khronos.org/registry/OpenGL/exten … ment_shader.txt
You would build up a shader with successive calls.

For some reason, Cg didn't get adopted by OpenGL, and they re-invented the wheel (poorly) with GLSL.

Ah, you could be right. Certainly D3D shaders were looked upon slightly enviously by GLians like myself back then 🙁.. not because of their existence though, more their flexibility (the date on that link suggests the extension should be called GL_AMD_fragment_shader 🤣). I remember using RenderMonkey in 03/04. GLSL was part of GL2.0, which had vague flutters of maturity at that point. I didn't use ARB shaders, which were part of GL1.5 (about 2001/02? maybe earlier). Cg was much later I think, and an nVidia attempt to unify/standardise both HLSL and GLSL into a single syntax. Until GPGPU became a thing (other than for deferred), it was only games that really required shaders in GL land.

Reply 12 of 24, by appiah4

Rank: l33t++
Scali wrote:
appiah4 wrote:

Well, to be fair, the ATI 8500 was the superior card to the GF3 and basically trumped it, so to say nVidia 'got the DX8 era right' is wrong.

The Radeon 8500 came out months later than the GF3, and only a few months before the GF4.
If anything, GF3 simply trumped the 8500 by being available as the first DX8 card.
And the GF4 redefined the DX8 performance benchmark, making the 8500 obsolete very quickly.
To me that makes it pretty obvious that NV got the DX8 era right.
ATi was too late to the party, and never had an answer to the GF4. They also suffered from driver issues, and were caught cheating (quack.exe), so that didn't really help the image of the 8500 either.

Radeon 8500 came months later but was on par with the Ti4200 that also came out literally almost a whole year later, so what exactly is your point? ATi did it better than the GeForce did; you can see this simply by looking at how well the R200 scales compared to the GeForce 3 with better hardware in games that use programmable shaders.

As for not having an answer to the GF4? What kind of fantasy world do you live in? The GF4 Ti series launched in April 2002; AMD released the R300 (9700 PRO) in August of that year and basically not only kicked the GF4's ass but the collective asses of the whole FX line before they were even released. If anything, the GF4 was a late-to-market product designed around a by-then obsolete DX level, aimed at competing with the Radeon 8500, a one-year-old card that was basically thrashing their inferior NV20 GPU.

For clarity:

GeForce 3 Launched: March 2001
Radeon 8500 Launched: August 2001
GeForce 4 Launched: April 2002
Radeon 9700 Launched: August 2002

In both cases ATI launched only 4-5 months later and in both cases roflstomped the nV cards.

And it's also funny how you can say GF3 is the clear winner of the DX8 era by virtue of being first to market, while somehow AMD being first to market with DX11 and kicking nVidia's ass with the HD5000 series results in a 'not clear' winner because, basically, reasons.

Retronautics: A digital gallery of my retro computers, hardware and projects.

Reply 13 of 24, by Scali

Rank: l33t
spiroyster wrote:

Cg was much later I think, and an nVidia attempt to unify/standardise both HLSL and GLSL into a single syntax.

No, MS and NV worked together on HLSL for DX9. NV made the project bigger, and made Cg, which could also be retrofitted to DX8 and OpenGL ARB shaders (I believe there was a beta version of HLSL available for DX8 as well). Cg is basically a HLSL compiler that can output 'legacy' shaders (so it was GPU-independent, although it could compile with NV-specific extensions as well).
Cg came out at about the same time as HLSL and DX9, long before GLSL arrived.
What's worse, GLSL required SM2.0 as a minimum, where HLSL and Cg can be used on anything from SM1.1 up (so all shader hardware from GF3 on).
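
As a rough illustration of how Cg sat on top of the existing APIs (a sketch assuming the old Cg Toolkit headers, not code from any particular engine): the runtime compiles the HLSL-style source to whatever profile the hardware supports, for example the arbfp1 ARB fragment program profile on non-NV cards.

#include <Cg/cg.h>
#include <Cg/cgGL.h>

// Compile a Cg fragment shader at runtime and load it as an ARB fragment
// program, so it runs on any vendor's OpenGL driver.
CGcontext ctx = cgCreateContext();

// Pick the best fragment profile the current GL implementation supports
// (falls back to arbfp1 on non-NV hardware, NV-specific profiles on GeForce).
CGprofile profile = cgGLGetLatestProfile(CG_GL_FRAGMENT);
cgGLSetOptimalOptions(profile);

CGprogram prog = cgCreateProgramFromFile(ctx, CG_SOURCE, "shader.cg",
                                         profile, "main", nullptr);
cgGLLoadProgram(prog);

// At draw time:
cgGLEnableProfile(profile);
cgGLBindProgram(prog);
// ... issue draw calls ...
cgGLDisableProfile(profile);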

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 14 of 24, by Scali

Rank: l33t
appiah4 wrote:

Radeon 8500 came months later but was on par with the Ti4200

In what universe was that?
The Radeon 8500 I bought got totally creamed by the Ti4200 in most games, and even by the GF3 initially:
http://www.anandtech.com/show/836/9
http://www.anandtech.com/show/899/5
It wasn't until years later, when the Catalyst drivers were finally mature enough, that the Radeon 8500 could show its full potential... which still wasn't quite Ti4200 performance, but close enough (and then NV still offered the Ti4400 and Ti4600 above that, completely out of reach of any DX8 GPU from ATi).
Also, GF4 was released in February 2002, not April: http://www.anandtech.com/show/875
Whereas Radeon 8500 was October 2001, not August (see review above).

Facts, they are important. Radeon 8500 isn't as good as you say it is.

appiah4 wrote:

As for not having an answer to the GF4? What kind of fantasy world do you live in? The GF4 Ti series launched in April 2002; AMD released the R300 (9700 PRO) in August of that year

You might notice that the 9700 is not a DX8 card, but a DX9 card, and that I actually listed the 9700 as the definitive DX9 card in my original post.

appiah4 wrote:

And it's also funny how you can say GF3 is the clear winner of the DX8 era by virtue of being first to market, while somehow AMD being first to market with DX11 and kicking nVidia's ass with the HD5000 series results in a 'not clear' winner because, basically, reasons.

Context is important here.
I didn't say 'clear winner' in the original context. I said that in many cases there is an obvious GPU design that drove a particular API revision.
With DX11 it isn't that clear-cut which GPU design would have driven the DX11 API revision.
This isn't even about performance.
DX11 is mostly an incremental update over DX10 anyway, mainly adding the following features:
- DirectCompute
- Deferred context
- Tessellation

DirectCompute was already available on GeForce 8800, so certainly not a new thing driven by HD5000.
Deferred context, as mentioned, is one of the things that AMD never got working right, so again, probably not driven by HD5000.
Tessellation: while ATi/AMD had tessellation before, it turned out to actually be one of the weak points of the HD5000 (and pretty much every future AMD GPU to this day).
The GeForce 4xx on the other hand had extremely good tessellation.
That's what makes it difficult to call. HD5000 was first, but it struggled with some of the new API features.
And in fact, AMD actually abandoned the VLIW-based design of the HD5000 for a scalar design, more closely resembling the GeForce GPUs since the 8800, because they better suit compute shaders and advanced graphics tasks.
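
For reference, the DirectCompute entry in that list is a fairly small API surface on the D3D11 side; dispatching a compute shader looks roughly like this (a sketch only, with shader compilation and resource creation left out):

#include <d3d11.h>

// Bind and dispatch a compute shader (DirectCompute). 'bytecode' is assumed
// to hold compiled cs_5_0 bytecode (or cs_4_0 for DX10-class hardware such
// as the GeForce 8800).
void RunComputeShader(ID3D11Device* device, ID3D11DeviceContext* ctx,
                      const void* bytecode, SIZE_T bytecodeSize,
                      ID3D11UnorderedAccessView* outputUAV)
{
    ID3D11ComputeShader* cs = nullptr;
    device->CreateComputeShader(bytecode, bytecodeSize, nullptr, &cs);

    ctx->CSSetShader(cs, nullptr, 0);
    UINT initialCount = 0;
    ctx->CSSetUnorderedAccessViews(0, 1, &outputUAV, &initialCount);

    // Launch 64x1x1 thread groups; the threads per group are declared
    // in the shader itself with [numthreads(...)].
    ctx->Dispatch(64, 1, 1);

    cs->Release();
}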


http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 15 of 24, by vladstamate

Rank: Oldbie
Scali wrote:

No, MS and NV worked together on HLSL for DX9. NV made the project bigger, and made Cg, which could also be retrofitted to DX8 and OpenGL ARB shaders (I believe there was a beta version of HLSL available for DX8 as well). Cg is basically a HLSL compiler that can output 'legacy' shaders (so it was GPU-independent, although it could compile with NV-specific extensions as well).
Cg came out at about the same time as HLSL and DX9, long before GLSL arrived.
What's worse, GLSL required SM2.0 as a minimum, where HLSL and Cg can be used on anything from SM1.1 up (so all shader hardware from GF3 on).

Indeed. Cg was used as the shading language for the PS3, which had an NV47 (non-unified pixel/vertex shader units). Oh man, if only you knew the GPU that Sony was about to use before they decided to go with the NV47 for the PS3 😀 Let's just say NV47/RSX was a good decision 😀

YouTube channel: https://www.youtube.com/channel/UC7HbC_nq8t1S9l7qGYL0mTA
Collection: http://www.digiloguemuseum.com/index.html
Emulator: https://sites.google.com/site/capex86/
Raytracer: https://sites.google.com/site/opaqueraytracer/

Reply 16 of 24, by spiroyster

Rank: Oldbie

I've ordered that Cg book for a little foray into it. As a previous FX5200 owner, I can't say I have fond memories of that fscking faery 😀. MS and the ARB weren't exactly 'mates' at the time Cg was released, so if they worked on it with NV, I can perhaps understand the hesitation to adopt it from the ARB's point of view. I'm guessing NV didn't want to donate it for 'industry forging TM'. o.0

I've just been reading up on 3DLabs' attempted coup of the API circa GL2.0. I still remember the media blackout around GL3 when that came out and all the hoo-ha of that fiasco; I wonder what might have been if 3DLabs had got their way with GL2.0. 😲

vladstamate wrote:

...the GPU that Sony was about to use before...

The suspense is killing me... what was it?

Reply 17 of 24, by Scali

Rank: l33t
spiroyster wrote:

I've ordered that Cg book for a little foray into it. As a previous FX5200 owner, I can't say I have fond memories of that fscking faery 😀. MS and the ARB weren't exactly 'mates' at the time Cg was released, so if they worked on it with NV, I can perhaps understand the hesitation to adopt it from the ARB's point of view. I'm guessing NV didn't want to donate it for 'industry forging TM'. o.0

Thing is, the ARB didn't have to do anything.
Cg was just a standalone compiler, which was free to download for everyone. It had an option to output standard ARB vertex and fragment programs, so all the required support was already in OpenGL.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 18 of 24, by spiroyster

Rank: Oldbie
Scali wrote:

Thing is, the ARB didn't have to do anything.
Cg was just a standalone compiler, which was free to download for everyone. It had an option to output standard ARB vertex and fragment programs, so all the required support was already in OpenGL.

From a technical point of view, perhaps. From a political point of view they all had to agree to it, though, and very few liked MS in that club. MS had only recently effectively snubbed the ARB by leaving it only a year or two before [EDIT: Nope, I got that wrong; they left in 2003, so they were still there]. The other hard hitters around the table were:

SGI (not a fan of MS due to recent failed Fahrenheit API collaboration with them)
3dLabs (who obviously had their own agenda for GL2.0 anyhows)
ATi (who were... well, ATi'ing it, and maybe didn't want to get involved supporting a direct competitor's toolkit)
Apple (doing their own thing with their OpenGL stack coming to fruition in the next few years, and that was always a few versions behind the standard iirc).
HP (also recently burned by MS I think, with Fahrenheit, but not burned as badly as SGI)
SUN/ES/Intergraph had interests in other *nix systems and CAD stuff, which (aside from performance gains, and in some cases vertex shaders) didn't really need what shaders provided... bigger raw vertex/texture throughput is what they were perhaps more concerned with at the time. The pipe was fine for them; it just needed a larger radius so that CAD developers could be even lazier and just throw the entire scene at GL and expect decent FPS... har har.

Kinda only leaves IBM, Intel and NV themselves. [EDIT: And MS 😉]

idk, I would love to have been a fly on the wall for these meetings o.0. I'm now looking forward to having a go with Cg 🤣. Have you ever used it?

Hindsight is a glorious thing. It's funny how many of the other ARB members at the time had every reason to hate MS in that room; however, outside of the ARB an awful lot of them had little collaborations with MS going on. 😀

Reply 19 of 24, by Scali

Rank: l33t++

Yea, the history of OpenGL is interesting... also within Microsoft itself. I recommend you read this blog:
http://web.archive.org/web/20170612154308/htt … on-of-direct3d/
It explains how MS didn't see OpenGL and Direct3D as mutually exclusive (and the Fahrenheit API more or less supports that).
It's more that OpenGL was initially aimed at high-end workstation graphics systems. And although MS initially attempted to expand OpenGL to be more low-end friendly, it wasn't very successful. That's why they decided that a separate API was required, specifically targeted to low-end consumer accelerators and games.
Of course the quick evolution of 3d accelerators meant that they caught up with the 'high end workstation' hardware in just a few years, and the main reason why OpenGL couldn't be used for games/consumers evaporated.

I've never used Cg myself, only the 'vanilla' HLSL of Direct3D.
For OpenGL I've used both ARB programs and GLSL.
I did want to look at using Cg, because I was trying to develop a 3D engine that could run on both Direct3D and OpenGL. The main problem there was the different shader languages. But then Cg got abandoned by NV, so I figured it would be a dead end. On non-NV hardware you would be limited to SM2.0 I believe.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/