VOGONS


GeForce 4 vs. GeForce FX?


Reply 60 of 217, by mockingbird

Rank: Oldbie
RandomStranger wrote on 2021-09-03, 05:54:

@o'Doyle
https://www.youtube.com/watch?v=XVO3NJCPIoY

The numbers speak for themselves.

System:
Core 2 E5800 @ 3.2 GHz
2 GB DDR @ 400 MHz, dual channel
Driver 45.23 for nVidia
Driver 10.2 for ATI

Benchmark:
-Quake 3 version 1.32c
-DEMO001
-1280x1024 resolution
-All settings maxed
-For 16-bit tests, color depth and texture quality were changed to 16-bit
-Each test run 3 times (console procedure sketched below)
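
For anyone reproducing these runs, the usual Q3A procedure is a console timedemo playback (the demo name is assumed to match a .dm_68 file under baseq3/demos):

    \timedemo 1
    \demo demo001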

Radeon 9700 (Dell OEM)
32-bit
170.7
170.8
170.7

16-bit
173.1
173.3
173.3

GeForce FX5800 (Quadro 1000 modified with the RivaTuner bootstrap method and clocked to 400/800 with Coolbits)
32-bit
245.4
245.4
245.4

16-bit
290.6
294.1
293.9
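
For reference, the Coolbits overclocking switch mentioned above is just a registry value. A minimal sketch of flipping it programmatically, assuming the classic Detonator-era key location (verify against your driver version before relying on it):

    import winreg  # Windows-only; needs admin rights

    # Assumed location of the classic CoolBits switch for
    # Detonator/ForceWare-era drivers; value 3 unlocks the clock sliders.
    key = winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE,
                           r"SOFTWARE\NVIDIA Corporation\Global\NVTweak")
    winreg.SetValueEx(key, "CoolBits", 0, winreg.REG_DWORD, 3)
    winreg.CloseKey(key)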

Scale these numbers down to the early 2000s, with sub-gigahertz machines -- why the heck should someone pay the premium for the ATI product just for DX9 support? The whole selling point was HL2, whose source code was leaked and which shipped well past its original launch date anyhow. You could get a LOT more longevity out of your system with an FX card.

The Radeon 9700 has a weak VRM compared to the 5800. The 5800 has good-quality, expensive polymer caps that are still fine. The 9700 had Nichicon HC-series caps (decent, but not adequate for a modern VGA VRM), which I already replaced years ago before storing the card.

Even if the 9700 were on par performance-wise with the 5800, look at the massive gain achieved by dropping to 16-bit. And the 9700 had weird graphical glitches in Quake 3 at 16-bit, though the benchmark did complete without issue.

Now one of you mavens please explain to me again how the FX generation of cards was a disappointment, and how great ATI was with their "revolutionary" 9700 series.


Reply 61 of 217, by bloodem

Rank: Oldbie

TL;DR
FYI, I got 256 FPS with a Radeon 9700 PRO in Quake 3 @ 1280 x 1024 (all settings maxed out). Granted, the 9700 PRO is faster than the 9700, but definitely not 50% faster. 😀
I'm guessing it's because you are using the much too new Catalyst 10.2 driver (released in 2010), which was optimized for much more modern titles.
On the other hand, for the nVIDIA card you are using a more period-correct driver from 2003.

1 x PLCC-68 / 2 x PGA132 / 5 x Skt 3 / 9 x Skt 7 / 12 x SS7 / 1 x Skt 8 / 14 x Slot 1 / 5 x Slot A
5 x Skt 370 / 8 x Skt A / 2 x Skt 478 / 2 x Skt 754 / 3 x Skt 939 / 7 x LGA775 / 1 x LGA1155
Current PC: Ryzen 7 5800X3D
Backup PC: Core i7 7700k

Reply 62 of 217, by The Serpent Rider

Rank: l33t++

Yeah, I still can't believe that nVidia was capable of going that far with cheating, especially with such a reputable benchmark.

Actually, they've both cheated here and there. In 3DMark2001, certain driver versions from Nvidia and ATi simplify the tree-leaf animation. If you look closely, the leaves move jerkily.

Funnily enough, the Nvidia driver that shipped with Windows XP SP3 is one of those "optimized" ones.

I must be some kind of standard: the anonymous gangbanger of the 21st century.

Reply 63 of 217, by leileilol

Rank: l33t++

The "nvidia favoring" Q3 is only due to r_primitives 0's buggy extension detecting behavior (done so because nvidia did CVAs badly in earlier Detonators 🤣). Set that to 1 on ATI. (....and later nvidia cards 🤣)

long live PCem

Reply 64 of 217, by mockingbird

Rank: Oldbie
bloodem wrote on 2021-09-03, 18:00:

TL;DR
FYI, I got 256 FPS with a Radeon 9700 PRO in Quake 3 @ 1280 x 1024 (all settings maxed out). Granted, the 9700 PRO is faster than the 9700, but definitely not 50% faster. 😀
I'm guessing it's because you are using the much too new Catalyst 10.2 driver (released in 2010), which was optimized for much more modern titles.
On the other hand, for the nVIDIA card you are using a more period-correct driver from 2003.

That sounds about right. Keep in mind that my "9700 TX", as Wikipedia refers to it, only has 263/263 clocks, whereas your fully-fledged 9700 Pro is 325/310.

Having said that, I just re-ran my test with Catalyst 7.7 and I got the following results:

183.8
184
184.7

So that's only about a 15 FPS increase with the period-correct driver.

Also, my card is dying. I just noticed artifacting in the Windows XP loading screen, and when I ran the benchmark, Quake had major artifacting (though it did complete without issue). Either it's dying or it doesn't like this much newer system; I should put it in my old BX motherboard to verify.


Reply 65 of 217, by cyclone3d

Rank: l33t++
mockingbird wrote on 2021-09-03, 21:22:
bloodem wrote on 2021-09-03, 18:00:

TL;DR
FYI, I got 256 FPS with a Radeon 9700 PRO in Quake 3 @ 1280 x 1024 (all settings maxed out). Granted, the 9700 PRO is faster than the 9700, but definitely not 50% faster. 😀
I'm guessing it's because you are using the much too new Catalyst 10.2 driver (released in 2010), which was optimized for much more modern titles.
On the other hand, for the nVIDIA card you are using a more period-correct driver from 2003.

That sounds about right. Keep in mind that my "9700 TX", as Wikipedia refers to it, only has 263/263 clocks, whereas your fully-fledged 9700 Pro is 325/310.

Having said that, I just re-ran my test with Catalyst 7.7 and I got the following results:

183.8
184
184.7

So that's only about a 15 FPS increase with the period-correct driver.

Also, my card is dying. I just noticed artifacting in the Windows XP loading screen, and when I ran the benchmark, Quake had major artifacting (though it did complete without issue). Either it's dying or it doesn't like this much newer system; I should put it in my old BX motherboard to verify.

It may behave perfectly well on an older system, since it would be CPU-limited there and the card wouldn't be stressed as much.

Better cooling may help it on the newer system.

Yamaha modified setupds and drivers
Yamaha XG repository
YMF7x4 Guide
Aopen AW744L II SB-LINK

Reply 66 of 217, by kjliew

Rank: Oldbie
vorob wrote on 2021-09-02, 06:03:

Is it a problem with nGlide on the FX, or on modern video cards as well? Because I thought Zeus fixed absolutely everything in Glide emulation 😀

I don't have that answer. It was based on results from testing QEMU Glide pass-through with nGlide on modern video cards. Since I didn't have the issue with dgVoodoo2 Glide, I guessed the pass-through was OK. The same issue appears with OpenGlide and Zeck's GlideWrapper, otherwise OpenGlide would have been great for Undying. Perhaps the previous poster who played Undying with nGlide on a GeForce4 or FX can check it out for us. It should be quick & easy.

BitWrangler wrote on 2021-09-02, 02:13:

Are rearview mirrors working in driving games? NFS series etc

The rearview mirrors in the NFS series (2 SE, 3 Hot Pursuit, 4 High Stakes & 5 Porsche) aren't using the same rendering technique, I believe. Even OpenGLide & Zeck's GlideWrapper work without any issue.

Reply 67 of 217, by swaaye

Rank: l33t++

I feel like chiming in!! 😁

I like the R300 cards for their better 16-bit dithering quality and their superior anti-aliasing and anisotropic filtering. They had NV beaten on AA & AF until G80 came out. Well, unless you like to mess with the hidden NV SSAA modes. And of course, if you want to play Shader Model 2 D3D games, the FX cards aren't very appealing.

For most OpenGL games, and old-game compatibility in general, it's NVidia. Zeckensack's Glide wrapper (OpenGL) works best on NV. dgVoodoo1 (D3D9) works rather nicely with ATI R300+. I don't really worry much about where they all fit in from a performance standpoint, aside from the FX cards being useless for most PS 2.0 games. From the GF4 vs. FX standpoint, I gravitate to the FX cards because they tend to work just as well as a GF4 and also have the D3D9 hardware curiosities to play with.

Oh and the pre-R300 Radeons have even more interesting 16-bit dithering, and so do the R500 Radeons.

Reply 68 of 217, by leileilol

Rank: l33t++

Yeah, pre-R300 Radeons had the option of a Floyd-Steinberg-like dithering as an alternative to the usual vertical-kernel ordered dithering. It can lend a film-grain-like quality.

nVidia's 16x16 dithering matrix is, well... what the hell is this supposed to be? A Restore Down button? (snapped from my FX5200; it's been the same since the Riva 128)
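
For anyone unfamiliar with the distinction being made here: ordered dithering applies a fixed threshold matrix by pixel position, while Floyd-Steinberg diffuses the quantization error into neighbouring pixels, which is what gives the grainy, less regular look. A minimal sketch of the ordered variant, using the textbook 4x4 Bayer kernel purely for illustration (the cards discussed use their own kernels):

    import numpy as np

    # 4x4 Bayer threshold matrix, normalized to [0, 1) -- the classic
    # "ordered" kernel; the nVidia matrix above is a 16x16 take on the idea.
    BAYER4 = np.array([[ 0,  8,  2, 10],
                       [12,  4, 14,  6],
                       [ 3, 11,  1,  9],
                       [15,  7, 13,  5]], dtype=np.float64) / 16.0

    def ordered_dither_to_565(rgb):
        """Quantize a float image in [0, 1], shape (H, W, 3), to RGB565
        levels using a fixed, position-dependent threshold."""
        h, w, _ = rgb.shape
        thresh = np.tile(BAYER4, (h // 4 + 1, w // 4 + 1))[:h, :w, None]
        levels = np.array([31.0, 63.0, 31.0])  # 5/6/5 bits per channel
        return np.floor(rgb * levels + thresh) / levels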

Attachment: nvdither.png (442 bytes, fair use)

long live PCem

Reply 69 of 217, by The Serpent Rider

Rank: l33t++

and anisotropic filtering

Yes and no. While the R300 takes more samples for anisotropic filtering, it's noticeably worse at certain angles. The FX series doesn't have that problem.

I must be some kind of standard: the anonymous gangbanger of the 21st century.

Reply 70 of 217, by swaaye

Rank: l33t++
leileilol wrote on 2021-09-04, 05:27:

Yeah, pre-R300 Radeons had the option of a Floyd-Steinberg-like dithering as an alternative to the usual vertical-kernel ordered dithering. It can lend a film-grain-like quality.

nVidia's 16x16 dithering matrix is, well... what the hell is this supposed to be? A Restore Down button? (snapped from my FX5200; it's been the same since the Riva 128)

Yeah, NVidia's dithering seems to be the bare minimum to get the job done: a static dithering pattern that can sometimes produce a rather muddy image.

The Serpent Rider wrote on 2021-09-04, 05:49:

Yes and no. While the R300 takes more samples for anisotropic filtering, it's noticeably worse at certain angles. The FX series doesn't have that problem.

Yeah, that's true. With the GF6 and GF7, I think NV's optimizations cause more problems. If you set them to High Quality mode, they look better, though. I remember Oblivion needing this to fix the appearance of snow.

With the FX cards it will also depend on the driver release, because NV kept chasing speed with filtering optimizations. From what I gather, the earliest drivers have the best quality.

The Radeon X1000 series has that awesome HQ AF mode, though. From what I've seen, it is superior to many of the Radeon HD cards (less aliasing) and similar to G80+.

Reply 71 of 217, by mockingbird

Rank: Oldbie
bloodem wrote on 2021-09-03, 18:00:

TL;DR
FYI, I got 256 FPS with a Radeon 9700 PRO in Quake 3 @ 1280 x 1024 (all settings maxed out). Granted, the 9700 PRO is faster than the 9700, but definitely not 50% faster. 😀
I'm guessing it's because you are using the much too new Catalyst 10.2 driver (released in 2010), which was optimized for much more modern titles.
On the other hand, for the nVIDIA card you are using a more period-correct driver from 2003.

mockingbird wrote on 2021-09-03, 21:22:

Also, my card is dying. I just noticed artifacting in the Windows XP loading screen, and when I ran the benchmark, Quake had major artifacting (though it did complete without issue). Either it's dying or it doesn't like this much newer system; I should put it in my old BX motherboard to verify.

I really have to say, the 9800 Pro does in fact crush the GeForce FX... I obtained one in rough shape and cleaned it up real nice; it's a perfectly good card, 380/340 clocks, 128 MB.

In Q3A with the aforementioned benchmark, I'm pulling around 260 FPS with everything maxed out, and 300 or so FPS if I turn the texture bit depth and color bit depth down to 16-bit. So I don't know exactly what they meant when they said the Radeon did not perform better when dropped to 16-bit color; I got a 40 FPS increase... this is with Catalyst 6.3. Maybe something changed with the 9800 Pro compared to the 9700 series in that regard.

And this thing is keyed for 3.3V AGP (AGP 2x)... so it's a very tempting choice for an older system, that is, if it weren't so expensive... They're not as easy to find in good working order at a reasonable price as a GeForce FX (I paid $20 or so for it, and another $20 for a GeForce2 GTS, both in rough shape and both restored successfully).

The next project will be to see if it can perform as well as the FX on a significantly slower system...


Reply 72 of 217, by stef80

Rank: Member
mockingbird wrote on 2022-01-04, 01:29:

And this thing is keyed for 3.3V AGP (AGP 2x)... so it's a very tempting choice for an older system, that is, if it weren't so expensive... They're not as easy to find in good working order at a reasonable price as a GeForce FX (I paid $20 or so for it, and another $20 for a GeForce2 GTS, both in rough shape and both restored successfully).

The 9800 Pro is actually a pretty common card to find... prices can be high, though.
There are some good alternatives, also keyed for 3.3V, that can be found for a better price: Radeon 9500 Pro, Radeon 9700 TX, FireGL X1-128.

Regarding the Undying mirror-reflection problem with nGlide that was mentioned earlier in the thread, wasn't that fixed a long time ago?
https://www.zeus-software.com/forum/viewtopic.php?t=218

Or is there a regression in the code?

Reply 73 of 217, by bloodem

Rank: Oldbie
mockingbird wrote on 2022-01-04, 01:29:

In Q3A with the aforementioned benchmark, I'm pulling around 260 FPS with everything maxed out, and 300 or so FPS if I turn the texture bit depth and color bit depth down to 16-bit. So I don't know exactly what they meant when they said the Radeon did not perform better when dropped to 16-bit color; I got a 40 FPS increase...

Who said that dropping to 16-bit color does not help? 😀

Dropping to 16-bit will help as long as both of the following are true:
1. there is no CPU bottleneck;
2. video memory bandwidth is the limiting factor.

Let's say you are using an early Athlon XP, which can do ~200-250 FPS in Quake 3. Obviously, in this scenario, with a Radeon 9800 PRO, you will still be CPU-bound even at very high 32-bit resolutions. So dropping to 16-bit will not increase your performance.

However, if you are using a Core 2 Duo with the same Radeon 9800 PRO and you test at 1920 x 1440, there will surely be a difference between 16-bit and 32-bit (see the rough estimate sketched below).
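
A back-of-envelope sketch of condition 2, with every figure assumed for illustration (fixed overdraw, no compression or caching, and a depth buffer matching the color depth, as Q3 typically sets it):

    # Crude framebuffer-traffic estimate: why 16-bit only matters once
    # the CPU can push the card past the 32-bit bandwidth ceiling.
    def frame_traffic_bytes(width, height, bytes_per_px, overdraw=3.0):
        """Approximate color + Z traffic per frame (reads + writes)."""
        pixels = width * height * overdraw
        color = pixels * bytes_per_px * 2   # color buffer read-modify-write
        depth = pixels * bytes_per_px * 2   # Z test (read) + Z write
        return color + depth

    MEM_BW = 21.8e9  # bytes/s -- assumed 9800 PRO: 340 MHz DDR x 256-bit bus

    for bpp, label in [(4, "32-bit"), (2, "16-bit")]:
        ceiling = MEM_BW / frame_traffic_bytes(1920, 1440, bpp)
        print(f"{label}: bandwidth ceiling ~{ceiling:.0f} FPS")
    # Prints roughly 164 FPS (32-bit) vs. 328 FPS (16-bit).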

mockingbird wrote on 2022-01-04, 01:29:

The next project will be to see if it can perform as well as the FX on a significantly slower system...

It depends on how slow you go. If you go with a very slow system (e.g. SS7/K6-2), both the FX and the Radeon 9800 PRO will give much worse results than a GeForce 2 GTS (with an early driver). Not only that, but the stability of these newer cards might also be a problem on SS7.

1 x PLCC-68 / 2 x PGA132 / 5 x Skt 3 / 9 x Skt 7 / 12 x SS7 / 1 x Skt 8 / 14 x Slot 1 / 5 x Slot A
5 x Skt 370 / 8 x Skt A / 2 x Skt 478 / 2 x Skt 754 / 3 x Skt 939 / 7 x LGA775 / 1 x LGA1155
Current PC: Ryzen 7 5800X3D
Backup PC: Core i7 7700k

Reply 74 of 217, by forteller

Rank: Newbie

Well... testing Radeons in Q3A is just part of the story. I have created an environment where a GF4 Ti 4200 significantly outperformed a Radeon X850 Pro simply by introducing a huge CPU bottleneck:
LINK
(Catalyst 10.2 used for Radeon unless stated otherwise)

I am not saying that it is not important - a lot of games used id Tech 3. I am just saying that drawing definitive conclusions about those cards based on Q3A alone (and with a setup that puts Nvidia even further ahead by using newer drivers on the ATi) is not objective, IMO. Nvidia is faster in OpenGL in general, but the vast majority of games were being developed for DirectX by 2002.

2017: 7800X@4,6G / X299 / 32GB / GTX 1080 / SM961 256GB+2x256GB RAID0 / G710+ / G402 / U2713H
2003: P4 2,8C@3,4G / IS7 / 2GB / AIW9700Pro / 160GB+2x40GB RAID0 / SK-8000 / IMO 1.1A / G200
2000: K6-3+@600M / 591P / 384MB / Voodoo3+1 / GUS+AWE32 / 40GB

Reply 75 of 217, by mockingbird

Rank: Oldbie
bloodem wrote on 2022-01-04, 13:57:

Who said that dropping to 16-bit color does not help? 😀

This was the case with early Radeons, and I remember it well... I spent a lot of money on a Radeon 64, and not getting the boost from 16-bit color was a tremendous disadvantage later on... In retrospect, keeping the Riva TNT for another year and waiting for the GeForce4 MX440 would have been a better choice.

From an old review discussing the Radeon 64's 16-bit performance:

"It is not until one notices the very small performance decrease between 16-bit and 32-bit color does one see why the Radeon can perform so well at 32-bit color. Be it a function of the drivers or the chip's architecture, the Radeon is just not meant to run in 16-bit color."

forteller wrote on 2022-01-04, 14:00:

Well... testing Radeons in Q3A is just part of the story. I have created an environment where a GF4 Ti 4200 significantly outperformed a Radeon X850 Pro simply by introducing a huge CPU bottleneck:
LINK
(Catalyst 10.2 used for Radeon unless stated otherwise)

Please don't use "four" for benchmarking.

I appreciate your work, but I can't take your results as an apples-to-apples comparison with the figures posted earlier... Demo "four" runs significantly faster than DEMO001, and DEMO001 should be the de facto standard for Q3A benchmarking.

I am attaching DEMO001 in .dm_68 format, which works with the Q3A 1.32 release.

Attachment: 1.17_demo001.dm_68.zip (96.97 KiB, public domain)


Reply 76 of 217, by Standard Def Steve

Rank: Oldbie
forteller wrote on 2022-01-04, 14:00:

Well... testing Radeons in Q3A is just part of the story. I have created an environment where a GF4 Ti 4200 significantly outperformed a Radeon X850 Pro simply by introducing a huge CPU bottleneck:
LINK
(Catalyst 10.2 used for Radeon unless stated otherwise)

I am not saying that it is not important - a lot of games used id Tech 3. I am just saying that drawing definitive conclusions about those cards based on Q3A alone (and with a setup that puts Nvidia even further ahead by using newer drivers on the ATi) is not objective, IMO. Nvidia is faster in OpenGL in general, but the vast majority of games were being developed for DirectX by 2002.

Those of you wanting top performance out of a Radeon 9600 through X800 in an older system should definitely check out Catalyst 4.12. That version has remarkably low system overhead, far lower than "fan favorites" like 6.2 and 10.2.

Testing Q3A and 3DMark2000 on a 9800 Pro, performance was nearly identical to that of a GF4 Ti4400 (61.76) in CPU-limited scenarios, indicating early-NVIDIA levels of driver efficiency!

Some Q3A demo001 numbers on WinXP SP3 with Cat 4.12. All done at 1024x768 windowed, 32-bit color/textures, full details, sound on.

Radeon 9800 Pro, PII-300MHz, i440BX, 512MB PC100 CL2-2-2-6
31.4 FPS

Attachment: P2-300-Q3A-9800Pro.png (733.85 KiB, public domain)

Same motherboard, but with a PIII-850
110.5 FPS

Attachment: P3-850-Q3A-9800Pro.png (598.33 KiB, public domain)

PIII-S 1628MHz, Apollo Pro 266T, 2GB DDR310 CL2-2-2-5
220.2 FPS

Attachment: PIII-1628, 9800Pro-Q3A.png (1.39 MiB, public domain)

94 MHz NEC VR4300 | SGI Reality CoPro | 8MB RDRAM | Each game gets its own SSD - nooice!

Reply 77 of 217, by Putas

Rank: Oldbie
mockingbird wrote on 2022-01-04, 22:01:

This was the case with early Radeons, and I remember it well... I spent a lot of money on a Radeon 64, and not getting the boost from 16-bit color was a tremendous disadvantage later on... In retrospect, keeping the Riva TNT for another year and waiting for the GeForce4 MX440 would have been a better choice.

From an old review discussing the Radeon 64's 16-bit performance:

"It is not until one notices the very small performance decrease between 16-bit and 32-bit color does one see why the Radeon can perform so well at 32-bit color. Be it a function of the drivers or the chip's architecture, the Radeon is just not meant to run in 16-bit color."

Please don't call it that; it's the Radeon 256 if you need to add numbers.

Even the Radeon 8500 benefits greatly from 16-bit color once you get into high resolutions and AA.

Reply 79 of 217, by The Serpent Rider

Rank: l33t++

The 9250 is a Radeon 9000 (not the Pro version) and usually has only a 64-bit bus. It's slower than a Radeon 8500 in any case.

I must be some kind of standard: the anonymous gangbanger of the 21st century.