VOGONS


Are GeForce 256 DDR cards that rare?


Reply 140 of 311, by appiah4

User metadata
Rank l33t++

Even if the framerate were the same, the fact that the GF2 can't run DX8-class shaders and suffers an immense IQ difference is by itself enough to disqualify it for DX8 games. FPS isn't everything. The GF2 can't show DX8 games as they are meant to be.

Retronautics: A digital gallery of my retro computers, hardware and projects.

Reply 141 of 311, by Scali

User metadata
Rank l33t
Reputator wrote:

I actually compared the two when I looked at 4x2 NVIDIA architectures, and in most tests the NV20 architecture did nearly double the NV15.

But you mostly compared games that could leverage the shaders and extra multitexturing that the GF3 has to offer, by the looks of things (Doom 3, Half-Life 2...).
That's not how the GF3 was originally received (which is why I picked a benchmark from the introduction of the GF3).
Most games were still DX7 and the GF2Ultra did very well in that, sometimes better than the GF3.
By the time there were games that would significantly benefit from the GF3 architecture, the GF4 was already on the market, which had rendered the GF3 irrelevant.
The GF3 was mostly a 'transitional' product. In a way it's very similar to the GeForce256: it offered the functionality of a new generation, but the market wasn't quite ready for it yet. The next iteration (GF2 and GF4 respectively) made the new technology come into its own.
As a result, a lot of people who already had a fast GF2 would sit out the GF3 and only upgrade when the GF4 arrived.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 142 of 311, by The Serpent Rider

User metadata
Rank l33t
Reputator wrote:

In older games you only really saw big gains at higher resolutions.

That's mainly because of the CPU bottleneck in such old games.

I actually compared the two when I looked at 4x2 NVIDIA architectures

You forgot to mention that GeForce 4 had two vertex shader units (three for FX 5900).

Scali wrote:

sometimes better than the GF3.

Only in 16-bit color and/or silly resolutions, and even that was negated by Ti 500.

The GF3 was mostly a 'transitional' product.

Transitional or not, it's still roughly twice as fast as the GF2, and even better than the GF4 MX series.

I must be some kind of standard: the anonymous gangbanger of the 21st century.

Reply 143 of 311, by Reputator

User metadata
Rank Member
Scali wrote:
Reputator wrote:

I actually compared the two when I looked at 4x2 NVIDIA architectures, and in most tests the NV20 architecture did nearly double the NV15.

But you mostly compared games that could leverage the shaders and extra multitexturing that the GF3 has to offer, by the looks of things (Doom 3, Half-Life 2...).
That's not how the GF3 was originally received (which is why I picked a benchmark from the introduction of the GF3).
Most games were still DX7 and the GF2Ultra did very well in that, sometimes better than the GF3.
By the time there were games that would significantly benefit from the GF3 architecture, the GF4 was already on the market, which had rendered the GF3 irrelevant.
The GF3 was mostly a 'transitional' product. In a way it's very similar to the GeForce256: it offered the functionality of a new generation, but the market wasn't quite ready for it yet. The next iteration (GF2 and GF4 respectively) made the new technology come into its own.
As a result, a lot of people who already had a fast GF2 would sit out the GF3 and only upgrade when the GF4 arrived.

Right, but if we're discussing its legacy, and what value the card will have to collectors and enthusiasts in the near future, then those other features that weren't leveraged at the time of launch will become important. It's interesting you mention the GeForce 256 as having a similar problem, when it's a very sought after card now due to its legacy. Something that wasn't appreciated as much when it was new.

https://www.youtube.com/c/PixelPipes
Graphics Card Database

Reply 144 of 311, by Scali

User metadata
Rank l33t

Am I the only one who wonders where this ... fanaticism for the GeForce3 is coming from, and why there's this need to cherry-pick benchmark results and misrepresent them as factual absolute performance across the board, to artificially inflate the alleged capabilities of the GF3?

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 145 of 311, by Scali

User metadata
Rank l33t
Reputator wrote:

Right, but if we're discussing its legacy, and what value the card will have to collectors and enthusiasts in the near future, then those other features that weren't leveraged at the time of launch will become important. It's interesting you mention the GeForce 256 as having a similar problem, when it's a very sought after card now due to its legacy. Something that wasn't appreciated as much when it was new.

Well yes, that's why I initially brought up the GeForce3 as a 'generation-defining' card.
The GF3 however was reasonably successful in its short lifespan, where the GeForce256 was a bit of a 'flop' I suppose. So the GF3 may have had a better reputation in its heyday than the GF256.
Initially most reviewers saw the T&L feature and the whole 'GPU'-thing of the GF256 as little more than a marketing gimmick. Which it wasn't of course, but with the lack of software that properly leveraged the capabilities of the GF256, that's the impression they got.
We now live in the 'post-T&L'-age, so we know how wrong these reviewers were, and how awesome GPUs are, and how much of a boost T&L was for geometry detail.
Heck, the term GPU is so common these days that most people don't even seem to have a clue what a GPU really is, or that there were videocards before the GPU. I see the term 'GPU' being used for an entire videocard, with people even referring to very old videocards/chips, such as early VGA clones, as 'GPUs'.
For the people who do understand the revolution that was the GPU, the significance of the GeForce256, the world's first GPU (well, that could be argued, but in terms of consumer/gamer-oriented products, surely) is obvious.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 146 of 311, by Reputator

User metadata
Rank Member
Scali wrote:
Reputator wrote:

Right, but if we're discussing its legacy, and what value the card will have to collectors and enthusiasts in the near future, then those other features that weren't leveraged at the time of launch will become important. It's interesting you mention the GeForce 256 as having a similar problem, when it's a very sought after card now due to its legacy. Something that wasn't appreciated as much when it was new.

Well yes, that's why I initially brought up the GeForce3 as a 'generation-defining' card.
The GF3 however was reasonably successful in its short lifespan, where the GeForce256 was a bit of a 'flop' I suppose. So the GF3 may have had a better reputation in its heyday than the GF256.
Initially most reviewers saw the T&L feature and the whole 'GPU'-thing of the GF256 as little more than a marketing gimmick. Which it wasn't of course, but with the lack of software that properly leveraged the capabilities of the GF256, that's the impression they got.
We now live in the 'post-T&L'-age, so we know how wrong these reviewers were, and how awesome GPUs are, and how much of a boost T&L was for geometry detail.
Heck, the term GPU is so common these days that most people don't even seem to have a clue what a GPU really is, or that there were videocards before the GPU. I see the term 'GPU' being used for an entire videocard, with people even referring to very old videocards/chips, such as early VGA clones, as 'GPUs'.
For the people who do understand the revolution that was the GPU, the significance of the GeForce256, the world's first GPU (well, that could be argued, but in terms of consumer/gamer-oriented products, surely) is obvious.

I agree completely. I have to skirt around even calling Rage or Riva series cards "GPUs", because I feel very strongly that those weren't "GPUs" as we know them yet. NVIDIA defined it very clearly, and it's interesting they even included a (very low) threshold of performance, 10mil polys/sec, as a requirement to be designated a GPU.

At any rate, your original point was that the GF3 wasn't as much of a 'leap' compared to other cards, right? Though that may simply have been the result of it being a little too far ahead of the games of its time.

https://www.youtube.com/c/PixelPipes
Graphics Card Database

Reply 147 of 311, by Scali

User metadata
Rank l33t
Reputator wrote:

At any rate, your original point was that the GF3 wasn't as much of a 'leap' compared to other cards, right? Though that may simply have been the result of it being a little too far ahead of the games of its time.

No, I'm a developer myself, and was speaking from a capability point-of-view.
The GF256/GF2 already introduced per-pixel lighting, stencil buffering, hardware-accelerated shadowmaps and such.
The fact that Doom3 can basically perform all its shadowing and per-pixel lighting even on a GF2 is a testament to how powerful the first GPUs were:
https://youtu.be/pEFWdiWiBa0

The first generation of 'shaders' as in the GF3 couldn't really do all that much more. The main difference was that you could now use 4 textures per pass instead of just 2, so you could use far less renderpasses (Doom3 needs no less than 6 renderpasses on the GF2 if I'm not mistaken), which made things somewhat more efficient. Then again, the raw power and bandwidth of a card like the GF2Ultra made it very fast with multiple renderpasses anyway.
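As a rough sketch of that pass arithmetic (the numbers and the helper name are illustrative, not measured data): the number of renderpasses a surface needs is just its texture-term count divided by how many textures the hardware can sample per pass, rounded up.

```python
import math

def passes_needed(textures_per_surface, textures_per_pass):
    """How many renderpasses it takes to apply all texture terms when
    the hardware can only sample a limited number per pass."""
    return math.ceil(textures_per_surface / textures_per_pass)

# Illustrative only: a surface combining 8 texture terms
# (diffuse map, normal map, attenuation, specular, etc.)
print(passes_needed(8, 2))  # dual-texturing, GF2-class part -> 4 passes
print(passes_needed(8, 4))  # quad-texturing, GF3-class part -> 2 passes
```

This is also why raw fillrate and bandwidth could partly compensate: each extra pass re-reads and re-blends the framebuffer, which a GF2Ultra-class card could simply brute-force.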
GF3 mainly made things a lot easier to program.
That, and the fact that Doom3 is the only game I know that actually tries to use the GF2 for per-pixel lighting/shadowing, so people mostly associate that sort of thing with GF3+ hardware.
But I implemented per-pixel lighting and shadowing on GF2 as well in my engine at the time, and it worked quite well. Unlike Doom3, my GF2 routines never saw a release though.
I did post this demo as a Flipcode image-of-the-day at the time though: https://youtu.be/3myGIK-7d0E
Here's another one I did with some shadowing and bloom: https://youtu.be/0g5Gkbp0Vn8
And this performs subtraction of objects (CSG), again with the GF2 GPU capabilities: https://youtu.be/PHYF51Asav8
So yea, I loved playing with the early GPU capabilities; the hardware could do a lot more than what most games did at the time. And given that, the GF3 wasn't a very big leap forward.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 148 of 311, by Reputator

User metadata
Rank Member

Yeah Doom 3 is a ridiculously impressive looking game considering the age of the hardware it will "run" on (as long as you're not looking to actually play). The R100 was even more capable in its pixel shading features, apparently just falling short of the original DX8.0 spec, right?

Your demos are very impressive. They look better than NVIDIA's own per-pixel GeForce 2 lighting demo.

https://www.youtube.com/c/PixelPipes
Graphics Card Database

Reply 149 of 311, by devius

User metadata
Rank Oldbie
Scali wrote:

Here's another one I did with some shadowing and bloom: https://youtu.be/0g5Gkbp0Vn8

This one looks really good to me. Is it running on a GeForce2?

The link to the binary file seems to be dead however. Do you still have that program?

Reply 151 of 311, by Scali

User metadata
Rank l33t
devius wrote:

This one looks really good to me. Is it running on a GeForce2?

Thanks. This version was recorded from a newer system, but yea, it should work on a GeForce2, probably also a GeForce256.
I've uploaded a bunch of old demos here, including the 'glowbal', two versions... stencil shadowed and shadowmapped:
https://www.dropbox.com/sh/5k88v7dvq6vx32r/AA … 0W3NnaoXza?dl=0

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 152 of 311, by Scali

User metadata
Rank l33t

It seems the Wiki article on the R100 has an interesting quote on that: https://en.wikipedia.org/wiki/ATi_Radeon_R100_Series

...prior to the final release of DirectX 8.0, Microsoft decided that it was better to expose the RADEON's and GeForce{2}'s extended multitexture capabilities via the extensions to SetTextureStageState() instead of via the pixel shader interface. There are various practical technical reasons for this. Much of the same math that can be done with pixel shaders can be done via SetTextureStageState(), especially with the enhancements to SetTextureStageState() in DirectX 8.0. At the end of the day, this means that DirectX 8.0 exposes 99% of what the RADEON can do in its pixel pipe without adding the complexity of a "0.5" pixel shader interface.
Additionally, you have to understand that the phrase "shader" is an incredibly ambiguous graphics term. Basically, we hardware manufacturers started using the word "shader" a lot once we were able to do per-pixel dot products (i.e. the RADEON / GF generation of chips). Even earlier than that, "ATI_shader_op" was our multitexture OpenGL extension on Rage 128 (which was replaced by the multivendor EXT_texture_env_combine extension). Quake III has ".shader" files it uses to describe how materials are lit. These are just a few examples of the use of the word shader in the game industry (nevermind the movie production industry which uses many different types of shaders, including those used by Pixar's RenderMan).
With the final release of DirectX 8.0, the term "shader" has become more crystallized in that it is actually used in the interface that developers use to write their programs rather than just general "industry lingo." In DirectX 8.0, there are two versions of pixel shaders: 1.0 and 1.1. (Future releases of DirectX will have 2.0 shaders, 3.0 shaders and so on.) Because of what I stated earlier, RADEON doesn't support either of the pixel shader versions in DirectX 8.0. Some of you have tweaked the registry and gotten the driver to export a 1.0 pixel shader version number to 3DMark2001. This causes 3DMark2001 to think it can run certain tests. Surely, we shouldn't crash when you do this, but you are forcing the (leaked and/or unsupported) driver down a path it isn't intended to ever go. The chip doesn't support 1.0 or 1.1 pixel shaders, therefore you won't see correct rendering even if we don't crash. The fact that that registry key exists indicates that we did some experiments in the driver, not that we are half way done implementing pixel shaders on RADEON. DirectX 8.0's 1.0 and 1.1 pixel shaders are not supported by RADEON and never will be. The silicon just can't do what is required to support 1.0 or 1.1 shaders. This is also true of GeForce and GeForce2.

Ties in with what I said earlier: the GF2 can do a lot of 'shader' tricks without SM1.x support. The fixed-function API is flexible enough to let you do per-pixel lighting, shadows, reflections and all that.
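As an illustration of the kind of fixed-function trick meant here: a DOT3 texture stage (D3DTOP_DOTPRODUCT3 in Direct3D) computes a per-pixel diffuse term without any shader. The sketch below is the arithmetic only, in plain Python rather than actual D3D code: both operands are decoded from biased [0,1] texture storage back to signed [-1,1] vectors, dotted, and clamped.

```python
def decode_signed(c):
    # Texture channels store [0,1]; normal maps encode [-1,1] as c * 2 - 1
    return tuple(2.0 * x - 1.0 for x in c)

def dot3(a_rgb, b_rgb):
    """Per-pixel diffuse term the way a fixed-function DOT3 stage
    computes it: decode both operands from biased storage, take the
    dot product, clamp the replicated result to [0,1]."""
    a = decode_signed(a_rgb)
    b = decode_signed(b_rgb)
    d = sum(x * y for x, y in zip(a, b))
    return max(0.0, min(1.0, d))

# A normal-map texel pointing at the viewer, lit along +Z:
print(dot3((0.5, 0.5, 1.0), (0.5, 0.5, 1.0)))  # -> 1.0
```

Feed it a per-pixel normal from a texture and a per-vertex light vector from the diffuse color, and you have per-pixel lighting on pre-shader hardware.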
And Radeon can even use EMBM with fixed-function (I think the same goes for certain Intel IGPs, and the PowerVR Kyro, possibly others). This requires dependent texture reads, something NV didn't implement until NV20. So it's not necessarily a 'shader'-feature, just something that NV didn't have on any of their pre-shader hardware.
GF3 only supported the simple 'hardwired' dependent texture reads for EMBM-like effects. It was the R200 that first gave you a proper dependent texture read instruction, which finally gave you something that was truly programmable: you could calculate any kind of coordinates, and do texture lookups whenever and however you wanted. GF3's shader instructions still felt a lot like hardwired 'register combiners' as we knew them from earlier GF models.
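A sketch of what a dependent texture read means in the EMBM case: the (du, dv) fetched from a bump map is run through a 2x2 matrix and perturbs the coordinates of a second fetch, so the address of the second read depends on the result of the first. Toy Python with a nearest-texel lookup, purely illustrative:

```python
def embm_lookup(env_map, base_uv, bump_sample, m):
    """EMBM-style dependent read: transform the bump map's (du, dv)
    by a 2x2 matrix and offset the environment map coordinates."""
    du, dv = bump_sample
    u = base_uv[0] + m[0][0] * du + m[0][1] * dv
    v = base_uv[1] + m[1][0] * du + m[1][1] * dv
    # Wrap addressing, then a nearest-texel fetch
    h, w = len(env_map), len(env_map[0])
    return env_map[int(v * h) % h][int(u * w) % w]

# 2x2 'environment map' with one bright texel; identity bump matrix.
env = [[0, 1],
       [0, 0]]
print(embm_lookup(env, (0.0, 0.0), (0.5, 0.0), [[1, 0], [0, 1]]))  # -> 1
```

The R200-style generalization mentioned above amounts to letting the shader compute u and v with arbitrary instructions before the fetch, instead of this one hardwired matrix-offset form.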

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 153 of 311, by Reputator

User metadata
Rank Member
silikone wrote:

How was multitexturing on the GF256?

In synthetic tests I found multitexturing increased the texel rate by about 80%. The GF2GTS brought this up to 89%, so surprisingly not a huge gain despite moving to a dual-texturing pipeline setup.

NVIDIA's next single-texturing architecture, NV40, increased the fillrate in multi-texturing tests by 77% in my results. NVIDIA was very good about leveraging multi-texturing, compared to ATI's R300 architecture, which only saw around a 46% increase (R420 and up seemed to fix this).
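For clarity, those percentages compare single-textured against multi-textured texel throughput; a perfect second TMU per pipeline would score 100%. The numbers below are illustrative placeholders, not the measured figures above:

```python
def multitexture_gain(single_rate, multi_rate):
    """Percentage gain in texel throughput when the second texture per
    pixel comes from an extra TMU instead of a second pass. An ideal
    dual-texturing pipeline doubles throughput, i.e. scores 100%."""
    return (multi_rate / single_rate - 1.0) * 100.0

# Illustrative rates in Mtexels/s, not benchmark results:
print(round(multitexture_gain(480, 864)))  # an 80% gain
```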

swaaye wrote:

I think 3dfx Rampage was built around whatever "pixel shader 1.0" is.
http://ixbtlabs.com/articles/3dfxtribute/index.html

The original Radeon had something like NVidia shading rasterizer / register combiners (NV1x) but I read an article years ago that described how NV's solution was more useful in the end. I wish I could find that article again but I have no idea what site it was on. On the other hand Radeon could perform EMBM whereas only NV20 onward can do that.

EMBM seems to be another case of the infamous "cap bits": its support goes back to DX6, but whether it ever became a minimum requirement, I'm unsure. Clearly NVIDIA could exclude support for it and still claim full DX6 and DX7 compliance, but Matrox and ATI went beyond those requirements.

But obviously that doesn't imply a programmable shader even if its use seems to be more commonly associated with DX8 and up. The extent to which R100's shaders were actually "programmable" will probably remain a mystery, and per Scali's quote seems to be something early drivers dabbled in, but have since buried.

https://www.youtube.com/c/PixelPipes
Graphics Card Database

Reply 154 of 311, by Arctic

User metadata
Rank Oldbie

I was hoping to read something about Geforce 256 cards...
Maybe you guys should fork it out to a separate thread about GPU politics.

Now Back²topic plz 😎

Are a lot of people here still looking for a Geforce 256?

Reply 156 of 311, by silikone

User metadata
Rank Member

How rare are 64MB versions? They used one on Anandtech, but I have never seen it before.

Do not refrain from refusing to stop hindering yourself from the opposite of watching nothing other than that which is by no means porn.

Reply 158 of 311, by AzzKickr

User metadata
Rank Member

Well, that boxed Prophet went for a lot less than I anticipated.

Also, by mere stupidity on my behalf I missed out on this:

http://www.ebay.it/itm/Retro-gamer-vintage-so … D-/112361870387

That would have been awesome to have since I already own the bare card.

Heresy grows from idleness ...