First post, by appiah4
wrote:
wrote: Maybe these companies didn't expect something like the GeForce chip could be released, but from that point both 3dfx and the others should have immediately followed the newer DirectX specifications just like NV did, maybe with old memories of the NV1 chip vs DirectX. Why stay on a DX6 design when others had already released refreshed DirectX 7 chips with more features (for example, NSR on the GeForce 2)?

Probably a question of cause and effect.
NV probably pioneered hardware T&L, and proposed to have it supported by the DX7 standard, while they may already have had prototypes working in the lab.
Other manufacturers focused on other things, and had to start from scratch on T&L, so they would be behind the curve here.
You see the same with virtually every version of DX... one company seems to get it right out of the gate, while the others struggle to keep up.
For example:
DX7: GF256
DX8: GF3
DX9: Radeon 9700
DX10: GeForce 8800 (going beyond that even, also pioneering GPGPU for OpenCL and DirectCompute).

DX11 and DX12 aren't as clear-cut. With DX11, the Radeon 5xxx was first, but NV ran into a lot of trouble with their GF4xx. Once they sorted it out in the 5xx series, they had excellent DX11 cards as well.
DX12 is more of an API update than a feature update, so cards with DX12 support were already on the market when DX12 launched. Ironically enough, the Intel GPUs are the most feature-complete DX12 GPUs on the market.
Well, to be fair, the ATI 8500 was the superior card to the GF3 and basically trumped it, so to say nVidia 'got the DX8 era right' is wrong.
Also, with DX12, although feature completeness is not the issue, performance gains from better leveraging of the API are AMD's forte.
Same goes for Vulkan.
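
To illustrate the "API update" point above: here's a minimal sketch (my own illustration, not from anyone's post; the adapter pick and feature level are arbitrary) of creating a D3D12 device against feature level 11_0. Because DX12 only needs 11_0-class hardware to get a device up, DX11-era cards could already expose it, while the optional capabilities where vendors differ are queried separately:

```cpp
// Sketch only: needs the Windows 10 SDK, link against d3d12.lib and dxgi.lib.
#include <windows.h>
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return 1;

    // First adapter only, purely for illustration.
    ComPtr<IDXGIAdapter1> adapter;
    if (FAILED(factory->EnumAdapters1(0, &adapter)))
        return 1;

    // DX12 accepts feature level 11_0 hardware: no new GPU generation is
    // required just to create a device, which is why existing DX11-class
    // cards could advertise DX12 support when the API launched.
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(adapter.Get(),
                                 D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device))))
        return 1;

    // The optional bits (resource binding tiers, conservative rasterization,
    // etc.) are separate capability queries; this is where feature
    // completeness differs between vendors.
    D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                &options, sizeof(options));
    std::printf("Resource binding tier: %d\n",
                static_cast<int>(options.ResourceBindingTier));
    return 0;
}
```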