agent_x007 wrote:
NSR (like ATI's Pixel Tapestry) is, to me, just an extension of the hardware T&L unit (first used in the GeForce 256).
It enables certain operations to be run in hardware (and per pixel).
Examples:
Shadow maps, bump mapping (EMBM, Dot Product 3, and embossed), shadow volumes, volumetric explosions, elevation maps, vertex blending, waves, refraction, and specular lighting.
Source: http://www.anandtech.com/show/537/4
Basically:
It's not a Pixel Shader in the modern sense because you can't program it (it's a "fixed function" of the chip).
That's why I called it a T&L extension, and not a "Pixel Shader" thing.
PS. It didn't change how we see graphics because it didn't have time - the GeForce 3 was presented less than a year after the GF2 GTS (and it could do fully programmable PS and VS).
I disagree with that somewhat.
Namely, the hardware T&L on the GeForce performs only vertex processing.
Back then, lighting was only calculated per-vertex, and interpolated across the triangle surface. Likewise, texture coordinates were interpolated between vertices.
What makes things like EMBM and dot3 special is that they evaluate a lighting/texturing function at every pixel. This allows more detailed lighting than just interpolating between vertices, hence the term per-pixel lighting.
So it is, by definition, a form of pixel-shading.
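To make the distinction concrete, here's a minimal C++ sketch of the two evaluation orders (hypothetical names, two vertices instead of three for brevity; not from any actual driver or game):

```cpp
#include <algorithm>

struct Vec3 { float x, y, z; };

float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
float lerp(float a, float b, float t) { return a + (b - a) * t; }

// Per-vertex (Gouraud): the lighting equation runs once per vertex, and the
// rasterizer only interpolates the resulting intensity across the triangle.
float perVertex(Vec3 n0, Vec3 n1, Vec3 L, float t)
{
    float i0 = std::max(0.0f, dot(n0, L));
    float i1 = std::max(0.0f, dot(n1, L));
    return lerp(i0, i1, t); // any detail between the vertices is lost
}

// Per-pixel (dot3): the inputs are interpolated (or fetched from a normal
// map), and the lighting equation itself runs at every pixel.
float perPixel(Vec3 n0, Vec3 n1, Vec3 L, float t)
{
    Vec3 n = { lerp(n0.x, n1.x, t), lerp(n0.y, n1.y, t), lerp(n0.z, n1.z, t) };
    return std::max(0.0f, dot(n, L)); // evaluated after interpolation
}
```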
And whether or not it is programmable can be debated.
The first generation of pixel shaders was very limited. You could write a simple script performing a handful of operations on a pixel, and the hardware behind it was still very much hardwired.
The GF2 can do more or less the same, because even the fixed-function units on the GF2 have two texture stages (multitexturing), and you can set any texturing operation for either stage. There are also multiple input/output registers to choose from.
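For illustration, here's roughly what configuring those two stages looks like through the Direct3D 8 fixed-function interface (a sketch assuming an already-initialized IDirect3DDevice8* device; normalMap and baseMap are placeholder textures):

```cpp
// Stage 0: per-pixel dot product between the normal map and a light
// vector passed in via the interpolated diffuse color (the usual dot3 setup;
// D3DTOP_DOTPRODUCT3 treats its inputs as biased signed vectors).
device->SetTexture(0, normalMap);
device->SetTextureStageState(0, D3DTSS_COLOROP,   D3DTOP_DOTPRODUCT3);
device->SetTextureStageState(0, D3DTSS_COLORARG1, D3DTA_TEXTURE);
device->SetTextureStageState(0, D3DTSS_COLORARG2, D3DTA_DIFFUSE);

// Stage 1: modulate the lighting result with the base texture.
device->SetTexture(1, baseMap);
device->SetTextureStageState(1, D3DTSS_COLOROP,   D3DTOP_MODULATE);
device->SetTextureStageState(1, D3DTSS_COLORARG1, D3DTA_TEXTURE);
device->SetTextureStageState(1, D3DTSS_COLORARG2, D3DTA_CURRENT);

device->SetTextureStageState(2, D3DTSS_COLOROP, D3DTOP_DISABLE);
```

That is effectively a two-instruction 'program': dot3 in stage 0, modulate in stage 1.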
So it is indeed 'programmable' to a certain extent. You can write 'programs' of two instructions per pass, and by performing multiple render passes (Doom3 uses 6 render passes on the GF2), you can implement quite elaborate pixel-shading programs.
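The multipass accumulation itself is just additive frame-buffer blending, something like this (a sketch; drawScenePass is a hypothetical helper that sets up the texture stages for one pass and draws the geometry):

```cpp
drawScenePass(device, 0); // first pass writes the base contribution

// From here on, add each pass's result on top of the frame buffer.
device->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
device->SetRenderState(D3DRS_SRCBLEND,  D3DBLEND_ONE);
device->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_ONE); // dest += src

for (int pass = 1; pass < numPasses; ++pass)
    drawScenePass(device, pass); // reconfigures the two stages each time
```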
In my opinion, Pixel Shader 1.1 (GF3) is little more than an extension of what the GF2 did. You got 4 texture stages instead of 2, and a few extra operations. But it was still very limited, and you couldn't do certain things.
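For reference, a typical ps.1.1 shader really does read like a short fixed script. A sketch of the same dot3-plus-modulate effect (assuming the D3DX8 assembler; exact signatures may differ between SDK versions, and the usual device setup is omitted):

```cpp
const char* src =
    "ps.1.1\n"
    "tex t0\n"                     // normal map
    "tex t1\n"                     // base texture
    "dp3_sat r0, t0_bx2, v0_bx2\n" // per-pixel N.L (light vector in v0)
    "mul r0, r0, t1\n";            // modulate with the base texture

LPD3DXBUFFER code = NULL, errors = NULL;
if (SUCCEEDED(D3DXAssembleShader(src, (UINT)strlen(src), 0,
                                 NULL, &code, &errors)))
{
    DWORD handle = 0;
    device->CreatePixelShader((DWORD*)code->GetBufferPointer(), &handle);
    device->SetPixelShader(handle);
}
```

The texture fetches are fixed at the top, and the arithmetic below can't feed coordinates back into them; that's the limitation the next version removed.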
Pixel Shader 1.4 is the first 'truly' programmable per-pixel shader. There you could actually do things such as calculating texture coordinates inside the shader and then using them to look up a texture. This meant that you could do any kind of dependent read on textures, and you could implement just about any lighting algorithm you could think of, including things like parallax occlusion mapping.
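To show what that means in practice, here's a rough ps.1.4 sketch of a dependent read (written from memory, an illustration rather than tested code): coordinates are computed in the first phase and used to fetch a texture in the second.

```cpp
// ps.1.4: coordinates computed before 'phase' drive a texture fetch after
// it; this is the dependent read that ps.1.1 could not express freely.
const char* src14 =
    "ps.1.4\n"
    "texld  r0, t0\n"              // fetch a perturbation/normal map
    "texcrd r1.xyz, t1\n"          // base map coordinates into a register
    "mad    r1.xyz, r0, c0, r1\n"  // offset the coordinates per pixel
    "phase\n"
    "texld  r1, r1\n"              // sample the base map at computed coords
    "mov    r0, r1\n";             // r0 is the final output color
```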