VOGONS


Geforce2 shading rasterizer info


First post, by 386SX


Hi,
I was asking myself whether the GeForce2's NSR technology, which as I understand it was some sort of early pixel-shading technique, was ever used specifically by any games before real Pixel Shader support arrived, as we remember it from the first games and benchmarks.
Thanks

Reply 1 of 36, by Scali


I think Doom3 is the closest thing to what you're looking for?
It has a specific GF2 path with OpenGL extensions, and it uses per-pixel lighting with its dot3 extension.

I have also implemented per-pixel Blinn-Phong lighting myself in D3D9 with the GF2's dot3 functionality.
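
For illustration, a minimal sketch of how such a dot3 setup can look through the D3D9 fixed-function texture stages (the helper name and the idea of packing the per-vertex light vector into the diffuse colour are just assumptions for the example):

#include <d3d9.h>

// Sketch: diffuse dot3 (N.L) bump mapping on fixed-function hardware like the GF2.
// Assumes 'dev' is a valid IDirect3DDevice9* and the per-vertex light vector has
// been range-compressed into the vertex diffuse colour ([-1,1] -> [0,255]).
void SetupDot3Pass(IDirect3DDevice9* dev,
                   IDirect3DTexture9* normalMap,
                   IDirect3DTexture9* baseMap)
{
    // Stage 0: dot product of the normal-map texel with the light vector.
    dev->SetTexture(0, normalMap);
    dev->SetTextureStageState(0, D3DTSS_COLOROP,   D3DTOP_DOTPRODUCT3);
    dev->SetTextureStageState(0, D3DTSS_COLORARG1, D3DTA_TEXTURE);
    dev->SetTextureStageState(0, D3DTSS_COLORARG2, D3DTA_DIFFUSE);

    // Stage 1: modulate the lighting result with the base texture.
    dev->SetTexture(1, baseMap);
    dev->SetTextureStageState(1, D3DTSS_COLOROP,   D3DTOP_MODULATE);
    dev->SetTextureStageState(1, D3DTSS_COLORARG1, D3DTA_TEXTURE);
    dev->SetTextureStageState(1, D3DTSS_COLORARG2, D3DTA_CURRENT);
}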

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 2 of 36, by 386SX


Thanks! I have always thought it would have been nice to see more games supporting a technique that really could have changed the appearance of PC graphics, at a time when graphics probably weren't improving much (just like environment mapping on the G400 before it, or other lesser-known proprietary techniques).

Reply 3 of 36, by agent_x007


NSR (like ATI's Pixel Tapestry) is, to me, just an extension of the hardware T&L unit (first used in the GeForce 256).
It enables certain operations to be run in hardware (and per pixel).
Examples:
Shadow maps, bump mapping (EMBM, Dot Product 3 and embossed), shadow volumes, volumetric explosions, elevation maps, vertex blending, waves, refraction and specular lighting.
Source: http://www.anandtech.com/show/537/4

Basically:
It's not a Pixel Shader in the modern sense because you can't program it (it's a "fixed function" of the chip).
That's why I called it a T&L extension, and not a "Pixel Shader" thing.

PS. It didn't change how we see graphics because it didn't have time - the GeForce 3 was presented less than a year after the GF2 GTS (and it can do fully programmable PS and VS).


Reply 4 of 36, by Scali

agent_x007 wrote:
NSR (like ATI's Pixel Tapestry) is, to me, just an extension of the hardware T&L unit (first used in the GeForce 256). It enables certain […]

I disagree with that somewhat.
Namely, the hardware T&L on the GeForce performs only vertex processing.
Back then, lighting was only calculated per-vertex, and interpolated across the triangle surface. Likewise, texture coordinates were interpolated between vertices.

What makes things like EMBM and dot3 special is that they evaluate a lighting/texturing function at every pixel. This allows more detailed lighting than just interpolating between vertices, hence the term per-pixel lighting.
So it is, by definition, a form of pixel-shading.

And whether or not it is programmable, can be debated.
The first generation of pixel shaders was very limited. You could write a simple script for a handful of operations on a pixel, which was still very much hardwired.
The GF2 can do more or less the same, because even the fixed-function units on the GF2 have two texture stages (multitexturing), and you can set any texturing operation for either stage. There are also multiple input/output registers you can choose from.
So it is indeed 'programmable' to a certain extent. You can write 'programs' of two instructions per pass, and by performing multiple render passes (Doom3 uses 6 render passes on the GF2), you can implement quite elaborate pixel-shading programs.
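
As a rough illustration of what 'multiple render passes' means in practice (a hypothetical D3D9 fragment, not the Doom3 code; DrawGeometry is an assumed helper): each pass sets up its two texture stages and is added onto the framebuffer with additive blending:

#include <d3d9.h>

// Assumed helper: configures the two texture stages for the given pass and draws.
void DrawGeometry(IDirect3DDevice9* dev, int pass);

// Sketch: accumulate several two-instruction passes in the framebuffer.
void AccumulateLightingPasses(IDirect3DDevice9* dev, int numPasses)
{
    dev->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
    dev->SetRenderState(D3DRS_SRCBLEND,  D3DBLEND_ONE);  // add each pass...
    dev->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_ONE);  // ...onto what is already there

    for (int pass = 0; pass < numPasses; ++pass)
        DrawGeometry(dev, pass);  // the blend state sums the passes into the final result
}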

In my opinion, Pixel Shader 1.1 (GF3) is little more than an extension of what GF2 did. You got 4 texture stages instead of 2, and a few extra operations. But it was still very limited, and you couldn't do certain things.
Pixel Shader 1.4 is the first 'truly' programmable per-pixel shader, where you could actually do things such as calculating texture coordinates inside the shader, and then using them to look up a texture. This meant that you could do any kind of dependent reads on textures, and you could implement just about any kind of lighting algorithm you could think of, including things like parallax occlusion mapping and such.


Reply 6 of 36, by 386SX

Scali wrote:
I disagree with that somewhat. Namely, the hardware T&L on the GeForce performs only vertex processing. Back then, lighting was […]

I don't remember much, but wasn't PS 1.4 almost never used?

Reply 7 of 36, by Scali

386SX wrote:

But could the final result have been similar to what we usually saw in Pixel Shader 1.0 games? (reflections, water, etc.)

Yes, you could do most tricks on GF2 as well. It supports render-to-texture and cubemaps, which can be used for various reflection/refraction tricks.
I made this thing on my GF2 back in the day: https://youtu.be/3myGIK-7d0E
It looks 'raytraced' with three coloured reflecting spheres and shadows (including self-shadowing).
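
The render-to-texture part of such a trick can look roughly like this (a hypothetical D3D9 sketch, not the original demo code, and RenderSceneForFace is an assumed helper): render the surroundings into each face of a cubemap, then sample that cubemap with reflection vectors:

#include <d3d9.h>

// Assumed helper: sets a view matrix looking along the given cube face and draws the scene.
void RenderSceneForFace(IDirect3DDevice9* dev, D3DCUBEMAP_FACES face);

// Sketch: update a dynamic environment cubemap (created with D3DUSAGE_RENDERTARGET).
void UpdateEnvironmentCubemap(IDirect3DDevice9* dev, IDirect3DCubeTexture9* cubeTex)
{
    for (int face = D3DCUBEMAP_FACE_POSITIVE_X; face <= D3DCUBEMAP_FACE_NEGATIVE_Z; ++face)
    {
        IDirect3DSurface9* surf = NULL;
        cubeTex->GetCubeMapSurface((D3DCUBEMAP_FACES)face, 0, &surf);
        dev->SetRenderTarget(0, surf);  // draw into this cube face
        dev->Clear(0, NULL, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER, 0, 1.0f, 0);
        RenderSceneForFace(dev, (D3DCUBEMAP_FACES)face);
        surf->Release();
    }
    // Afterwards, bind cubeTex and look it up with reflection (or refraction) vectors.
}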


Reply 8 of 36, by Scali

386SX wrote:

I don't remember much, but wasn't PS 1.4 almost never used?

I think it is used by quite a few popular games from that era. Half-Life 2 uses ps1.4. It looks almost as good on a Radeon 8500 as it does on DX9 hardware, with very good water effect and all.
Doom3 also has a special path with the Radeon 8500 OpenGL shader extensions.
I think Far Cry also uses it, and probably others.


Reply 9 of 36, by 386SX

Scali wrote:
I think it is used by quite a few popular games from that era. Half-Life 2 uses ps1.4. It looks almost as good on a Radeon 8500 […]

Oh, I do remember Doom3 with the 8500 - man, I agree it was great! Also, Half-Life 2 was to my eyes one of the most "realistic"-looking statically-lit games on the rendering/effects side, even though it didn't need all the overused bloom/"dreamy"/absurd effects of other games.
One thing I always wondered was why games didn't invest in using pixel shading for some realistic indirect global illumination and HDR (the real kind you could see in 3D Studio Max etc.) instead of all those excessive reflection effects on every kind of surface.

Reply 10 of 36, by agent_x007

Scali wrote:
I disagree with that somewhat. Namely, the hardware T&L on the GeForce performs only vertex processing. Back then, lighting was […]

So, you could do almost all the effects on a GF2 GTS that you could on a GF3 - OK, but what about the performance in the end?
Can a GF2 GTS (or another DX7-class card) handle those extended "programs" and multiple render passes fast enough to render a smooth picture in practice?

Also, T&L stands for Transform and Lighting, so why can't lighting on a per-pixel basis be an extension of it?


Reply 11 of 36, by Scali

386SX wrote:

One thing I always wondered was why games didn't invest in using pixel shading for some realistic indirect global illumination and HDR (the real kind you could see in 3D Studio Max etc.)

Well, quite simply because they are very complex lighting models.
Proper global illumination requires some kind of photon path-tracing approach. Only now, with DX12 hardware, do we have some way to accelerate this, with conservative rasterization into 3D volume textures, which can be used as sparse tree structures.
This is currently only available on nVidia (900-series) and Intel (Skylake) hardware, and probably only the high-end nVidia parts are fast enough to actually do something with it in realtime. Hopefully the coming generation of games will start using it.

HDR requires an extended range of colour information, which wasn't possible until DX9 hardware introduced floating-point pixel shaders and textures.
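
For reference, the DX9 part of that is essentially just asking for a floating-point surface instead of the usual 8-bit-per-channel one (a sketch assuming 'dev' is a valid IDirect3DDevice9*; the resolution is arbitrary):

#include <d3d9.h>

// Sketch: a half-float render target, so lighting results can go above 1.0
// and be tonemapped down later.
IDirect3DTexture9* CreateHdrTarget(IDirect3DDevice9* dev)
{
    IDirect3DTexture9* hdrTarget = NULL;
    dev->CreateTexture(1024, 768, 1, D3DUSAGE_RENDERTARGET,
                       D3DFMT_A16B16G16R16F, D3DPOOL_DEFAULT, &hdrTarget, NULL);
    return hdrTarget;  // NULL if the hardware/driver does not support the format
}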


Reply 12 of 36, by Scali

agent_x007 wrote:

Can a GF2 GTS (or another DX7-class card) handle those extended "programs" and multiple render passes fast enough to render a smooth picture in practice?

The GF2GTS and GF2Ultra were extremely fast cards at the time, and performance was quite similar to the GF3 models.
In Doom3, the 6-pass approach was acceptable up to a certain resolution (back then we still used 640x480 or 800x600).

agent_x007 wrote:

Also, T&L stands for Transform and Lighting, so why can't lighting on a per-pixel basis be an extension of it?

Transforming and lighting happens per-vertex, as I already said.
Doing something per-pixel is different.
Per-vertex operations happen on the incoming geometry data.
The result of T&L is passed to the rasterizer, the attributes are set up for interpolation, and then the rasterizer will send each pixel through pixel shading.
So they are different parts of the pipeline, and work on different types of data.
Especially in those early generations of hardware, there was a huge difference. VS1.1 was already floating-point and had quite a powerful instruction set, where PS1.1 was very limited, integer-only stuff. It wasn't until DX10 that pixel shaders became as powerful as vertex shaders (the unified shader model).
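
A toy CPU sketch of that difference (illustrative only, nothing like how the hardware is actually built): per-vertex lighting evaluates N.L at the endpoints and interpolates the result across a span, while per-pixel lighting interpolates the inputs and evaluates N.L for every pixel:

#include <algorithm>

struct Vec3 { float x, y, z; };
static float Dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 Lerp(Vec3 a, Vec3 b, float t) { return { a.x + (b.x - a.x)*t, a.y + (b.y - a.y)*t, a.z + (b.z - a.z)*t }; }
static float Lit(Vec3 n, Vec3 l) { return std::max(0.0f, Dot(n, l)); }

// Per-vertex (Gouraud): light the endpoints, interpolate the *result*.
void ShadeSpanPerVertex(Vec3 n0, Vec3 n1, Vec3 lightDir, float* out, int count)
{
    float c0 = Lit(n0, lightDir), c1 = Lit(n1, lightDir);
    for (int x = 0; x < count; ++x)
    {
        float t = (count > 1) ? x / float(count - 1) : 0.0f;
        out[x] = c0 + (c1 - c0) * t;
    }
}

// Per-pixel (dot3-style): interpolate the *inputs*, light every pixel.
void ShadeSpanPerPixel(Vec3 n0, Vec3 n1, Vec3 lightDir, float* out, int count)
{
    for (int x = 0; x < count; ++x)
    {
        float t = (count > 1) ? x / float(count - 1) : 0.0f;
        out[x] = Lit(Lerp(n0, n1, t), lightDir);
    }
}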


Reply 13 of 36, by 386SX

Scali wrote:
Well, quite simply because they are very complex lighting models. Proper global illumination requires some kind of photon path-t […]

😀 Nice, I'm happy to see that in the end we will get this in games. I always thought these are the things that could really make a game look "realistic" to the human eye before "everything" else, at least as I remember from using 3D Studio for some tests, even with very simple environments (a blank flat room with a window, a single object and an external light source).
DX7 cards couldn't do that at the time, but didn't HL2 in some ways use precalculated illumination that "appeared" similar to global illumination? (I remember very nice, realistic rendering in the "prison stage".)

Reply 14 of 36, by Scali


Yes, various games use precalced lighting/shadowing/reflections.
Half-Life 2 had an interesting technique where there were 'light probes' at various places in the level. For each light probe, they would precalc a cubemap with a 'snapshot' of light information at that spot. As you moved through the level, it would grab data from the different probes, which was a decent approximation of global illumination for the time.
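
A toy sketch of the probe idea (hypothetical data structures, not Valve's code): pick the probe nearest to the object being drawn and use its precalced cubemap for the lighting lookup:

#include <cfloat>
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };
struct LightProbe { Vec3 position; unsigned cubemapId; };  // cubemap precalced offline

static float DistSq(Vec3 a, Vec3 b)
{
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx*dx + dy*dy + dz*dz;
}

// Pick the probe closest to the object; its cubemap is the 'snapshot' of light there.
const LightProbe* NearestProbe(const std::vector<LightProbe>& probes, Vec3 objPos)
{
    const LightProbe* best = NULL;
    float bestDist = FLT_MAX;
    for (size_t i = 0; i < probes.size(); ++i)
    {
        float d = DistSq(probes[i].position, objPos);
        if (d < bestDist) { bestDist = d; best = &probes[i]; }
    }
    return best;
}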

The interesting part about Half-Life 2 was that the shading of the game was designed to run in a single pass. This allowed the game to reach very high framerates.


Reply 15 of 36, by leileilol


I was always curious about how the water refraction in the GeForce2 cave tech demo was done, how the image was rendered and mapped etc., and why I never saw anything like that in games on that card.

Q3A had lightgrids, which were like light probes, but with just a single direction of light color + ambient, on a fixed grid, calculated in software. With q3map2's -bouncegrid you can sort of approximate super-fake global illumination as well.
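
Roughly, a lightgrid lookup amounts to indexing a regular array of precomputed samples by position (an illustrative sketch; the field layout here is just a guess at the idea, not the actual BSP format):

struct GridSample
{
    unsigned char ambient[3];    // ambient light color
    unsigned char directed[3];   // color of the single directed light
    unsigned char direction[2];  // its direction, stored as two angles
};

// Illustrative lookup: clamp the position into the grid and fetch the nearest cell.
GridSample LookupLightGrid(const GridSample* grid, const int dims[3],
                           const float origin[3], const float cellSize[3],
                           const float pos[3])
{
    int idx[3];
    for (int i = 0; i < 3; ++i)
    {
        int c = (int)((pos[i] - origin[i]) / cellSize[i]);
        if (c < 0) c = 0;
        if (c >= dims[i]) c = dims[i] - 1;
        idx[i] = c;
    }
    return grid[(idx[2] * dims[1] + idx[1]) * dims[0] + idx[0]];  // row-major flat array
}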

long live PCem

Reply 16 of 36, by Scali

leileilol wrote:

I was always curious about how the water refraction in the GeForce2 cave tech demo was done, how the image was rendered and mapped etc., and why I never saw anything like that in games on that card.

This one?
https://www.youtube.com/watch?v=G4zw_qMU5OA

That looks like it's just per-vertex perturbation. The water mesh is just animated on the CPU (simple verlet integration of a 2d heightmap). It's also done in 3DMark2000 and XLR8.
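
That kind of CPU water animation is basically the classic two-buffer heightfield wave step (a toy sketch, not the demo's actual code):

#include <vector>

// Sketch: propagate ripples across a w x h heightfield; 'curr' and 'prev' hold
// the heights of the current and previous step, and are swapped each frame.
void StepWater(std::vector<float>& curr, std::vector<float>& prev,
               int w, int h, float damping)
{
    for (int y = 1; y < h - 1; ++y)
        for (int x = 1; x < w - 1; ++x)
        {
            int i = y * w + x;
            // Average of the four neighbours minus the previous height, then damped.
            float v = (curr[i - 1] + curr[i + 1] + curr[i - w] + curr[i + w]) * 0.5f - prev[i];
            prev[i] = v * damping;
        }
    curr.swap(prev);  // the freshly written buffer becomes the current heightfield
}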

I think I've seen more interesting things though. Like NVidia's vertex program for refraction with Fresnel.
See the 'JRefract' here, which is based on that old demo: http://jogamp.org/jogl-demos/www/
On a GF2 the vertex program would run in software on the CPU, but it can perform the effect in realtime.

This again uses per-vertex trickery. Refraction can be faked in a similar way to reflection: with a cubemap to do environmental lookups. You can cheat by just rendering the frontfaces, it is convincing enough.
Here is an article on that: http://http.developer.nvidia.com/CgTutorial/c … _chapter07.html
If you want to go for more correct, you could render the refraction with the backfaces into a new cubemap, and then render the frontfaces based on that one.
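
For illustration, the per-vertex maths behind that kind of effect looks roughly like this (a sketch, not NVidia's vertex program): a refracted direction for the cubemap lookup, plus a Schlick-style Fresnel factor to blend reflection and refraction:

#include <cmath>

struct Vec3 { float x, y, z; };
static float Dot(Vec3 a, Vec3 b)    { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3  Scale(Vec3 v, float s) { return { v.x*s, v.y*s, v.z*s }; }
static Vec3  Add(Vec3 a, Vec3 b)    { return { a.x+b.x, a.y+b.y, a.z+b.z }; }

// Refract the normalized view vector I about normal N with index ratio eta;
// the result is used as a lookup direction into the environment cubemap.
Vec3 Refract(Vec3 I, Vec3 N, float eta)
{
    float cosi = -Dot(I, N);
    float k = 1.0f - eta * eta * (1.0f - cosi * cosi);
    if (k < 0.0f) return { 0.0f, 0.0f, 0.0f };  // total internal reflection
    return Add(Scale(I, eta), Scale(N, eta * cosi - std::sqrt(k)));
}

// Schlick's approximation: how strongly to weight reflection versus refraction.
float Fresnel(Vec3 I, Vec3 N, float r0)
{
    float c = 1.0f + Dot(I, N);  // = 1 - cos(theta) for I pointing at the surface
    return r0 + (1.0f - r0) * c * c * c * c * c;
}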


Reply 17 of 36, by 386SX

Scali wrote:

Yes, various games use precalced lighting/shadowing/reflections. Half-Life 2 had an interesting technique where there were 'light probes' at various places in the level. […]

Interesting! I still consider it more realistic nowadays than (many) purely pixel-shaded games that came later.

Regarding the water and light simulation, well, for its time the 3DMark2000 one was indeed quite good, with both the animation and the reflections. OK, the pixel-shaded 3DMark2001 one was MUCH better, but I didn't like the animation (though the whole Nature demo part was incredible at that time, and quite heavy).

Reply 18 of 36, by Davros


Have you read this :

Attachments:
  • Nsr.pdf (247.71 KiB, fair use/fair dealing exception)

Guardian of the Sacred Five Terabytes of Gaming Goodness

Reply 19 of 36, by Scali

Davros wrote:

Have you read this :

Haha yes, many moons ago.
Brings back memories.
The GeForce2 GTS was the first card on which I did serious 3d acceleration. I wrote my DirectX 8 engine on that, and the engine I'm using today is still based on those foundations.
It was later updated to DX9, and eventually I merged it with my DX10/11 codebase into a single engine with some #ifdef magic and some wrapper classes to abstract away the differences.
The engine still supports fixed-function shading on the GF2 GTS, and in my software there is an unsupported fallback path for fixed-function and software vertex shaders. So you can actually run it on a GF2 GTS, even though it is not officially supported 😀 I just thought it was cool to put that in there.
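
As a generic illustration of that sort of '#ifdef magic' (not the actual engine code, and the macro/class names are made up): a thin wrapper compiled against whichever API is selected at build time, so the rest of the engine only sees one interface:

// Illustrative only: pick the underlying API at compile time.
#ifdef USE_D3D9
#include <d3d9.h>
typedef IDirect3DDevice9 NativeDevice;
#else
#include <d3d11.h>
typedef ID3D11DeviceContext NativeDevice;
#endif

class RenderDevice
{
public:
    explicit RenderDevice(NativeDevice* native) : native_(native) {}

    void ClearTarget()
    {
#ifdef USE_D3D9
        native_->Clear(0, NULL, D3DCLEAR_TARGET, 0, 1.0f, 0);
#else
        // The D3D11 path would clear an explicit render target view tracked
        // elsewhere in the engine; omitted in this sketch.
#endif
    }

private:
    NativeDevice* native_;
};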
