VOGONS


First post, by kanecvr

Rank: Oldbie

Hi guys. For the last 3-4 months I've been benchmarking and collecting data on nvidia and ATi video cards made between 1999 and 2003, and today I'm going to release the results for the AGP 4x and 8x nvidia cards, with ATi cards and earlier video chipsets (Riva 128, Rage 128) to come at a later date. Let's start off with the test machine configuration:

Test System:

CPU: Athlon 64 3400+ (socket 754, NewCastle core) @ 2.4GHz, 512KB L2 Cache
Mainboard: MSI K8MM-V (VIA K8M880) Socket 754
RAM: 2x 512MB Kingmax DDR400
PSU: Highpower (Sirtec) 500W PSU
HDD: 80GB Maxtor SATA HDD
Sound Card: Aopen PCI Studio AW320 (crystal 4xxx)

3Quk0PKl.jpg

Software used:

Windows XP SP3
Nvidia Forceware 66.93 (43.45 for the TNT2 Ultra)
VIA 4 in 1 5.24
3D Mark 2000
3D Mark 2001
Quake 3
Unreal Gold
Dungeon Keeper 2

The Cards:

HFk9wRuh.jpg
kl6mF6Gh.jpg
N8ChECZh.jpg

Test methodology - I went through quite a few issues with the test system configuration. At first I wanted a machine with universal AGP, and the KT333 / Barton 3200+ was the way to go - until I noticed some of the cards did not play well with my Shuttle or Gigabyte KT333 boards, and to top it all off, some cards were bottlenecked by the CPU. I was getting around ~11000 to 12000 pts with the Ti4200, Ti4600, FX 5700, FX 5900XT, Radeon 9700 and 9800 PRO cards, and Quake 3 results at 640x480 got eerily similar from the Geforce 3 Ti500 upwards - so I decided to replace that machine with the socket 754 rig described above. We can still see CPU bottlenecking in some tests, but it's not as obvious as with the socket A rig. I will however be using the socket A machine with my 2333MHz 3200+ to test 3DFX cards as well as other cards that require 3.3V AGP.

I tried to use both OpenGL and Direct3D titles for benchmarking, and included 3DMark 2000 and 2001 for synthetic performance. I would have liked to get more games in this test, but it's pretty hard to find games with a built-in benchmark, and benchmarking games with Fraps gives unreliable results. Unreal was run at 800x600 / 16 bit, with everything set to high. Quake 3 Arena was benchmarked at both 640x480 / default settings and 1280x1024 / 32 bit, to try and get around that CPU bottleneck and show which card is better suited for running the game all out. Dungeon Keeper ran at 1024x768 / 16 bit color with everything maxed out through registry hacks.
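Each number in the charts is an average of several runs. A tiny helper like the one below (purely illustrative - the function name and FPS figures are made up, this is just the averaging I did by hand) shows the idea, including a sanity check for runs that disagree too much:

```python
from statistics import mean

def average_runs(fps_runs, max_spread_pct=5.0):
    """Average several benchmark runs; warn if they disagree too much."""
    avg = mean(fps_runs)
    spread = (max(fps_runs) - min(fps_runs)) / avg * 100
    if spread > max_spread_pct:
        print(f"warning: {spread:.1f}% spread between runs - retest this card")
    return avg

# Example: three Quake 3 timedemo runs (made-up numbers)
print(round(average_runs([388.2, 391.0, 389.5]), 1))  # 389.6
```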

Results:

FSD7zhmh.png

dERc4T5h.png

There's not much to say here. Both versions of 3D Mark show the same performance differences across generations and models, with the older TNT2 Ultra, Geforce 2 MX and Geforce 256 showing rather poor performance in DirectX 8.1 gaming at 1024x768.

The 6200 performs as expected, especially being the 64 bit version. The 128 bit version would have done a lot better - probably somewhere between the FX 5200 and the FX 5700. The 6800LE tops the charts in both benchmarks, with the unlocked card taking a small lead. The 6800 isn't really a DX7/8 card - it really shines in DX9 - but I did not run any DX9 benchmarks because most of the cards would run 3DMark 2003 in slide-show fashion, if at all.

0pNtdxbh.png

As you can see, all cards perform well in Quake 3 at 640x480 / 16 bit color, the game being playable on all of them. The Geforce 256 takes a small lead over the much higher clocked (but crippled) Geforce 2 MX. The Geforce 2 line (except for the MX) performs pretty much the same in this test, showing the limitations of the architecture. My GF 2 PRO sample has slightly higher clocked memory than the GTS so it takes a small lead, trailing the much higher clocked GF2 Titanium by very little. The Geforce 4 MX 440 performs really well in this test - the card tested is clocked at 275 GPU / 400 vRAM and uses DDR SGRAM as opposed to the DDR SDRAM (basically higher clocked computer RAM) that most MX 440 cards use. Not to mention this card closely follows the reference MX 440 clocks, while most other MX 440 cards I've seen (especially the passively cooled ones) are clocked lower at 250 / 400 or even 225 / 333, some using a 64 bit memory bus. While technically a Geforce 2 core card, the MX440 performs in Quake 3 close to the GF 3 Ti500, which should by all means be a much faster card.

Bfrg32Vh.png

As you can see above, the GF 4 MX440 is crippled when compared with the Geforce 2 Titanium, which has twice the number of fragment pipelines, texture units and raster operators. The MX 440 has a slightly higher core clock, and both cards use DDR SGRAM clocked at 200MHz (400 effective) - but the GF 4 MX 440 manages to outpace the GF 2 Ti not only in Quake 3 at 640x480, but in 3D Mark 2000 and 2001 as well. What gives? Theoretically the Geforce 2 Ti should be twice as fast.
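A back-of-the-envelope calculation hints at the answer. Assuming approximate reference specs (250MHz / 4 pipes for the GF2 Ti, 275MHz / 2 pipes for the MX 440, both on 128 bit DDR at 200MHz - these are assumptions, not measurements from my cards), the two share the same peak memory bandwidth, so the GF2 Ti's extra fill rate has far fewer bytes per pixel to work with:

```python
def fillrate_mpix(core_mhz, pixel_pipes):
    """Theoretical pixel fill rate in Mpixels/s."""
    return core_mhz * pixel_pipes

def bandwidth_mb(mem_mhz, bus_bits, ddr=True):
    """Peak memory bandwidth in MB/s."""
    return mem_mhz * (bus_bits // 8) * (2 if ddr else 1)

cards = {
    "GF2 Ti":    {"fill": fillrate_mpix(250, 4), "bw": bandwidth_mb(200, 128)},
    "GF4 MX440": {"fill": fillrate_mpix(275, 2), "bw": bandwidth_mb(200, 128)},
}

# Bytes of memory bandwidth per theoretical pixel; 32 bit rendering
# with Z reads/writes wants well over 8 bytes per pixel.
for name, c in cards.items():
    print(f"{name}: {c['bw'] / c['fill']:.1f} bytes/pixel of peak fill")
```

The GF2 Ti ends up with roughly 6.4 bytes per theoretical pixel against the MX 440's ~11.6, so both cards are bandwidth-bound and the "twice as fast on paper" fill rate never materializes.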

jjHlsLU.png

At 1280x1024 we see similar results, but this time the gap widens, and the MX 440 is 20% faster than the Geforce 2 Ti... It's then safe to conclude that if you want to build a capable 2000's gaming machine, using a quality 128 bit Geforce 4 MX 440 will yield better performance and save you some $$$, since the Geforce 2 Ti is quite rare and can be pretty expensive compared to the MX 440, which you can get for 2-5$ in most cases. In fact, in more than one test this MX 440 card comes close to the Geforce 3 Ti 200 - again, a somewhat rare and possibly expensive card.

The 128 bit FX 5200 card tested performs close to the MX 440, so it's also a viable option for cheap retro computing - just make sure that the cards you buy are not crippled versions with the 64 bit memory bus. This is pretty easy to check if the seller has pictures, since the cheaper 64 bit cards have fewer RAM chips (4 vs 8), have a smaller PCB and are usually passively cooled (in the case of the GF4, since most FX5200 non-ultra cards are passive).

The TNT2 Ultra does not offer a playable experience at this resolution, and the GF 256 SDR and GF 2 MX deliver borderline playable framerates.

The FX 5900XT takes a HUGE lead at 1280x1024 in Quake 3, and has the highest score by a wide margin in 3DMark 2001. From what I've tested, the 5900XT is by far the best DirectX 8.1 card. While it is technically a DX9 card, it performs rather poorly there compared to the Radeon 9700 and 9800 (as we will see in later benchmarks), so if you want to build a DX9 machine, go for team RED, or look for a Geforce 6600 - although the 6xxx cards don't play nice with some machines, namely socket A / socket 478 rigs, while the 9800 PRO is pretty painless to install, use and maintain, while offering more performance than those platforms can handle. Now back to the 5900XT - old reviews badmouth the card for its poor DX9 performance, but it's a great fit for 1999-2002 games, as almost all of them use DX 8.1 or lower. Its performance in DX7 and lower is also above any other tested card, further reinforcing my choice.

XOrCEtih.png

Things in Unreal are... weird. The fastest card in this test is the Geforce 4 Ti 4200, which doesn't make sense. If the GF 4 Ti architecture is the best performer in this game, then the much higher clocked Ti 4600 should be the winner, no? But that's not the case. The Ti 4200 manages the top score, while the GF 3 Ti 200 trails in second place, beating the GF 3 Ti 500, the GF 4 Ti 4600 and the FX 5700.

Unreal is also the only benchmark where the GF 4 MX 440 did not outpace its Geforce 2 relatives - in fact it places between the GF 2 GTS and the GF 2 MX.

As we go past the GF 3 series, we can see that newer cards don't perform much better - this is possibly due to the CPU bottleneck I mentioned above.

3SOxxXZh.png

Dungeon Keeper II likes fast vRAM and favors the FX series architecture. The FX 5900XT has a commanding lead here. If I were to increase the resolution to 1600x1200, we would see an even bigger margin in favor of the 5900XT and the Ti4600, with the older GF 2, GF 3 (except for the Ti 500) and the GF 4 MX delivering poor framerates - but due to time constraints and the need to constantly hack the registry to get the game to run at that resolution after changing video cards, I opted not to include those results in this benchmark. The TNT2 Ultra flat-out refused to run the game at anything other than 640x480, so it gets 0 points. This can be seen in 3DMark 2000 as well, where it would not run the default benchmark.

The 6200 severely underperformed in DK2 because of its 64 bit bus - and DK2 likes wide buses and fast RAM. The 128 bit version would have done a lot better in this game, but unfortunately I don't own a 128 bit 6200.

The 6800 tops all charts in both 8p and 16p configuration. It's most notable in games, particularly DK2, which loves high bandwidth, fast RAM.

ATi cards used in test:

- HP Rage 128 Fury 32MB 128 bit 143c / 143m
- Hercules Radeon 7000 64MB 64 bit 166c / 166m
- Dell Radeon 7200 (Radeon SDR) 32MB 128 bit 166c / 166m
- Sapphire Radeon 8500 LE 64MB 250c / 500m
- Hercules Radeon 9000 128MB 128 bit 275c / 400m
- GeCube Radeon 9550XT 128MB DDR (600MHz) 128 bit, 400Mhz Core
- Asus Radeon 9600XT 500c / 600m Mhz
- Club3D Radeon 9700 128Mb flashed to 9500 - 128 bit 275c / 540m MHz
- Club3D Radeon 9700 128MB 256 bit 275c / 550m MHz
- Hercules Radeon 9800 PRO 128MB 256 bit 398c / 702m MHz
- BBA Radeon 9800 XT 128MB 256 bit 425c / 720m MHz
- Gigabyte Radeon X800XT AGP 256MB 256 bit 500c / 1000m

Software used:

Windows XP SP3
ATi Catalyst 6.2 (except for the Rage 128, which uses a much older driver)
VIA 4 in 1 5.24
3D Mark 2000
3D Mark 2001
Quake 3
Unreal Gold
Dungeon Keeper 2

g25Ue7Cl.jpg

The Radeon 8500 (non-LE) and the X800 PRO were not benchmarked. I had issues with the 8500 (artefacting) and found a missing SMD capacitor near one of the memory modules; as for the X800 PRO, the fan failed... I'll possibly take the time to replace it with one of those huge chinese coolers, but since I keep the card on display I'd kind of prefer to keep the stock cooler on. Anyway, a sticker reading "dead cooler, DO NOT USE" was added to the back of the card. Now on to testing!

As I mentioned previously, the MSI mainboard I did most of my nvidia testing with did not like some of my video cards - instability, blue screens and very poor performance (6700 pts for the Radeon 8500 vs the 10k it should have gotten) - so I had to retest half of the ATi cards and some of the nvidia cards using another (more stable) platform. This time the test system:

CPU: Athlon 64 3800+ (socket 939, Venice core) 2400MHz 512KB of L2 cache - HT bus was set to 800MHz to emulate a socket 754 CPU
Mainboard: ECS KV2 Extreme (VIA K8T890)
RAM: 1GB DDR400 (1 stick, single channel as on the socket 754 platform)
PSU: Highpower (Sirtec) 500W PSU
HDD: 80GB Maxtor SATA HDD
Sound Card: Aopen PCI Studio AW320 (crystal 4xxx)

Now the main differences between the socket 754 and 939 platforms are a faster HT bus on the latter (1000MHz for 939 vs 800MHz for 754) and dual channel memory support for 939 chips. I used a single RAM stick in the 939 machine, and set the HT bus speed to 800MHz from the BIOS. I ran some tests and both machines scored within 1-2% of each other, which I deemed an acceptable margin of error. I wish I could have used the original machine for testing, but it flat-out refused to play nice with 1/3 of the tested video cards, regardless of brand.
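For reference, the "within 1-2%" check is just the relative difference between the two machines' scores - sketched below with made-up placeholder scores, not my actual results:

```python
def pct_diff(a, b):
    """Relative difference between two benchmark scores, in percent."""
    return abs(a - b) / ((a + b) / 2) * 100

# Hypothetical 3DMark 2001 scores for the two platforms
s754_score, s939_score = 11850, 11990
print(f"{pct_diff(s754_score, s939_score):.2f}% difference")  # 1.17% difference
```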

The CPUs are otherwise identical, save for the 3800+ having a dual channel memory controller and a 200MHz faster HT bus:

s939 3800+ => www.cpu-world.com/CPUs/K8/AMD-Athlon%20 ... WBOX).html
s754 3400+ => http://www.cpu-world.com/CPUs/K8/AMD-Athlon%2 … 3400AIK4BO.html

So now that that's sorted out, let's look at some charts! First off, 3DMark 2000 and DirectX 7 performance:

FEpGb8Ih.png

It's interesting to note that from the 9700 upwards there's little to no score increase in this test due to CPU limitations. The 9800 PRO scores on par with the FX 5900XT, which is to be expected. The Rage 128 and Radeon 7000 do very badly in this test, which is expected given their 64 bit memory buses and entry level positioning. They are not completely useless though, as we will see in other benchmarks. The Radeon SDR (7200) performs on par with the Geforce 256 SDR, yielding a similar score. In fact these cards performed very closely across all games / benchmarks, with the Radeon having an overall affinity for OpenGL games.

A3GtZAxh.png

The Rage 128 and the Radeon 7000 (Radeon VE) prove useless in 3DMark 2001. The 7200 (Radeon SDR) manages to outpace the Geforce 256 SDR here. The Radeon 9500 (re-flashed 9700) manages to beat both the 9600XT and the 9550XT, scoring somewhere between the FX 5700 and the 5900XT, as we will see in the centralized charts later on.

HW5iehMh.png

At 640x480 all cards deliver playable framerates, including the Rage 128, but something odd happens in this benchmark... yup - the 8500LE is as fast as the Radeon 9500 at this resolution in this game, beating both RV360 chip cards (the 9550XT and the 9600XT). I don't know exactly why this happens, but around 390 fps seems to be the CPU limit for this game at this resolution, and the 8500 comes close. The fastest card here was the 9700, closely trailed by the 9800 cards and the X800XT, which scored between the 9800 PRO and the 9500. All cards with the exception of the Rage 128 used the same driver (Catalyst 6.2) and the exact same settings - I made sure of it, and re-ran the benchmark 3-4 times to get an average.

9ohElAhh.png

Things drastically change as we move on to higher resolutions. Here clock speeds, memory bus width and vertex / texture unit counts start to make a visible difference, as the X800XT sits at the top of the chart sporting a small lead over the 9800 cards. In second place are the R300 based Radeon 9500 and 9700, followed by the RV360 based 9600XT and 9550XT, which score within a small margin of each other despite the 75MHz core clock difference. The game is unplayable at this resolution on the Rage 128 and the Radeon VE (7000), but playable on the Radeon SDR, which again scores along the lines of the Geforce 256.

J0OkZowh.png

Unreal again shows the limitations of the platform. In fact most cards perform very well in Unreal at 800x600 / default settings. The 8500LE manages to impress, getting one of the top 3 scores in this benchmark - a situation similar to Quake 3 at 640x480. I still don't know why this happens, but it happened with the nvidia cards as well, with the Geforce 3 Ti 500.

tZExrwhh.png

Dungeon Keeper 2 shows its affinity for cards with a wide memory bus and fast RAM. The R300 and R350 based cards take a considerable lead in this benchmark.

Overall, the most surprising card in this test was the 8500, which at low resolutions is up there with the big boys. Switching to higher resolutions shows its limits, but I believe that if the 8500 had a better memory controller or a wider memory bus it would have done better there as well. The "artificial" 9500, although a halved R300 chip, still manages to outperform the 9550XT and 9600XT in all tests despite running at much lower clocks. Perhaps the 4 extra ROPs and 2 extra vertex pipelines it has over the RV360 based 9550 and 9600 are enough to compensate.

1wCf4Cyh.png

The Rage 128 is about as fast as a TNT2 M64 - it's suitable for 640x480 and 800x600 gaming, tops. It also displayed some weird, ugly alpha dithering in Dungeon Keeper II, but I believe that's driver related. The Radeon VE (Radeon 7000) is not really a gaming card. Performance is lacking due to the fact that it is a halved Radeon 7200/7500 chip, coupled with the slow 64 bit bus. It would do fine in a Pentium II or K6-2 machine, but in anything faster it would become a bottleneck.

The Radeon 9000 is a cut-down version of the R200 Radeon 8500, and it shows in benchmarks. My sample is clocked 25MHz higher than the 8500LE it ran against, and despite that it performs worse in all tests. Here's why:

R1FM3oth.png

As you can see, the 8500LE has 4 more texture units and 1 more vertex unit, and it shows in the benchmarks. Frankly I don't know why ATi named the card "Radeon 9000" when it would have been more appropriate to name it "Radeon 8000" to reflect its performance. Although branded a 9xxx series card, it's DirectX 8.1 compliant just like the 8500 and brings nothing new to the table. The Radeon 9200 and 9250 are AGP 8x versions of the RV250 Radeon 9000 - as such I did not include them in the test, as my 9200 sample is clocked lower (250/400) than my 9000 PRO (270/500) and would have yielded even poorer performance. In theory a Radeon 9200 or 9250 with the same 270MHz core and 500MHz memory should perform just like the 9000 PRO in this review, maybe 1% better due to being AGP 8x compliant - BUT as far as I know all 9200/9250 cards are clocked at 250/400, making the 9000 the better choice. Again ATi chose a higher number for a slower card, which is confusing and in some cases downright annoying. 9200 cards also often use a 64 bit memory bus, making them even slower, so if you do see such a card in the wild, make sure it's not a 64 bit model.

Overall the 128 bit, 64 or 128MB Radeon 9000 PRO is a decent card, and would do OK in a fast Pentium III or early socket A machine. The 8500 shines when used in faster systems - as you can see it can bottleneck a rather fast socket 939 rig at low resolutions. I guess the best fit for it would be a mid-range socket A or socket 478 machine.

I was expecting to have driver trouble with the ATi cards, but it was actually pretty smooth sailing - and unlike the nvidia cards, where newer drivers broke compatibility with older games, and newer video cards (like the 6800) have issues with some early games at high resolutions or in 16 bit mode, the X800XT performed perfectly in everything I tried to run on it. In fact I would warmly recommend it and the X800 PRO as alternatives to the rarer and more expensive Geforce 6800 AGP. It has Win98 drivers as well, and unlike the 6800's Win98 drivers, which feel buggy and unfinished (the XP drivers are perfectly fine), Catalyst 6.2 did not cause any compatibility issues with any game, and I did not need to downgrade or look for alternatives.

Last edited by kanecvr on 2017-01-31, 17:25. Edited 15 times in total.

Reply 1 of 45, by kanecvr

Rank: Oldbie

Here are the overall results, with all brands / models, sorted by performance:

k6yWbFch.png

The fastest card in Unreal Gold is the X800XT, followed closely by the 9700 and the 8500LE. While the X800XT performs as expected, I'm stumped by the 9700's performance, as the 9800 series should be faster since they run off a similar (if not identical) core, but are clocked higher. I'm chalking this one up to drivers, as I believe the 9700 driver might be better optimized for older games - I can't think of another explanation. The 8500LE is another huge surprise, as it manages to outpace the 9800 cards as well as the 6800 and the 5900XT. I know for a fact that the nvidia cards performed worse in this test due to a driver issue: the Geforce 3/4 Ti series get over 10% better results using Forceware 43.xx under Win98, and I vaguely remember the 5900XT scoring much better with Forceware 56.xx - the problem is that both those drivers bring a slew of compatibility issues. 43.xx has horrid performance in 3DMark 2001 with newer cards, and Quake 3 crashes at 1280x1024 with others. Dungeon Keeper II shows a black screen on anything newer than a GF3 Ti on my test system with that driver - which is why 66.93 was chosen for the tests. Also please consider that testing each card with the fastest driver for each game would be an enormous pain, so that was out of the question. In general the 6800 does not do well in older games that require DX6, and in some cases early DX7 games, regardless of driver - by that I mean there are compatibility issues with some games, like a black screen in DKII (sound plays) when using some Forceware 7x.xx drivers, as well as some texture problems in Quake 3. In general I wouldn't recommend this card for older games, as it will only run correctly (at least on my machines) with the Forceware 66.31 or 66.93 drivers, and those don't offer the best performance unless you run DX9 stuff.

Now back to the 8500 - I actually recorded a clip with my phone of it running Unreal with timedemo on, because I could not believe it kicked the 6800's ass... yeah... again, I'm putting this down to driver optimization and the "right card for the game" factor.

More weirdness can be observed by comparing the Geforce 3 and 4 Titanium series numbers - here the slower clocked cards performed better... I don't know what to make of this. I actually re-ran the benchmarks for the GF3 Ti200 and Ti500 cards and the results were more or less the same every time. In fact the Ti500 was a tiny bit faster than the Ti4600. If it weren't for an increase (small, but still there) in numbers as I started testing newer cards, I would have blamed a CPU bottleneck - but since the X800XT got 30 more FPS than, say, the Ti4200, I'm not sure that's the case.

mBsO51Uh.png

Like I said before, Dungeon Keeper II is a rather demanding game. If you play at 1024x768, almost any card can run it - but if you kick it up to 1600x1200 and turn shadow detail up to 3, you will need at least a Ti4600 to get a minimum framerate of 30... it's a pretty good looking game, but I have a feeling that, like Homeworld, it's a pretty poorly optimised one. It also craves CPU power and a wide video memory bus. All the 256 bit bus cards are on top, while below them cards line up by clock speed or texture unit count.

UNXtzTCh.png

Quake 3 at 640x480 displays more weirdness... again the 9700 is on top - in fact the fastest card tested, despite being clocked lower than the 9800 while having the exact same core. The 9800 series trails it very closely, with about 1 FPS difference, and the X800XT follows after. While I did use Catalyst 6.2 for all ATi cards, it's possible that ATi packaged an earlier 9500/9700 driver in it without making any changes, which would explain the card's performance in older games. Just for kicks I tried some DirectX 8.1 games (namely Freelancer and TES: Morrowind) and the 9800/X800XT are quite a bit faster than the 9700 in those.

6kmtqb6h.png

This chart starts to make sense. While ATi cards dominated at 640x480, here the unlocked 6800LE with 16 pipes / 6 vertex units, overclocked to 400 / 850, takes the lead. As expected the 9800 cards beat the FX 5900XT - in fact I later discovered they also beat an FX 5900 Ultra (soft-modded Quadro FX3000), which I did not include in the benchmarks because it scores extremely close to the FX 5900XT in all benchmarks. What's also interesting is that here we get to see how powerful the Geforce 4 Ti4600 is, as it beats both the FX 5700 and the 9600XT - mid range cards one generation newer.

nrhAHqwh.png

UurIngBh.png

It's interesting to note that the Radeon SDR and the Geforce 256 SDR are neck and neck in most benchmarks and games. I wonder if the Radeon DDR (7500) and the Geforce 256 DDR benchmark results are as close. Unfortunately I don't have either to test...

The results speak for themselves. It's worth noting that nvidia card results are heavily influenced by driver version, with performance varying by as much as 25% for some cards. This is most notable in Unreal. It's possible that the higher end / newer nvidia cards would have gotten better performance with the same drivers at a higher resolution, but do consider the sheer amount of time that installing drivers, messing with hardware and testing takes - I'm not looking forward to re-doing these results.

I will be updating this thread much much later with 32 bit 1600x1200 results in all games, and I will just be testing the faster cards - but first I have to find some time to test the 3dfx cards as well...

Last edited by kanecvr on 2017-02-01, 03:07. Edited 19 times in total.

Reply 2 of 45, by kanecvr

Rank: Oldbie

Reserved for 3DFX Cards.

3DFX Cards used in tests:

- 3DFX Voodoo 4MB (Skywell Magic 3D), 60c / 60m, 128 bit
- 3DFX Voodoo 2 (single card) 12MB (Skywell Magic 3D II), 110c / 110m, 192 bit
- 3DFX Voodoo 2 SLi 2x12MB, (Creative 3D Blaster II), 100c / 100m, 2x 192 bit
- 3DFX Voodoo Banshee (Diamond Monster Fusion) 16MB SGRAM, 125c / 125m, 128 bit
- 3DFX Voodoo 2 2000 (STB) 16MB SDRAM, 143c / 143m, 128 bit
- 3DFX Voodoo 3 3000 (STB) 16MB SDRAM, 166c / 166m, 128 bit
- 3DFX Voodoo 3 3500 (unknown) 16MB SDRAM, 183c / 183m, 128 bit
- 3DFX Voodoo 4 4500 (PowerColor EvilKing 4), 166c / 166m, 128 bit
- 3DFX Voodoo 5 5500 (3dfx), 32MB x2, 166c / 166m, 2x 128 bit

Test system:

CPU: Athlon XP 3200+ (512kb, Barton core) 2333Mhz, FSB 333
Mainboard: Shuttle AK35GT2/R (VIA KT333a)
RAM: 1GB Kingmax DDR400
HDD: 40GB Maxtor IDE
Sound Card: Aureal Vortex 2
PSU: 350W FSP Group (80+ certified)

Software:

- Windows 98 Second Edition
- Latest 3dfx release driver for said cards
- Voodoo2 Tweaker V1.1
- VoodooControl V1.82 Beta

Last edited by kanecvr on 2017-01-18, 14:31. Edited 4 times in total.

Reply 3 of 45, by meljor

Rank: Oldbie

Nice work! Keep em coming, will study this later.

asus tx97-e, 233mmx, voodoo1, s3 virge ,sb16
asus p5a, k6-3+ @ 550mhz, voodoo2 12mb sli, gf2 gts, awe32
asus p3b-f, p3-700, voodoo3 3500TV agp, awe64
asus tusl2-c, p3-S 1,4ghz, voodoo5 5500, live!
asus a7n8x DL, barton cpu, 6800ultra, Voodoo3 pci, audigy1

Reply 4 of 45, by clueless1

Rank: l33t

Beautiful, man. Great work. Maybe the weirdness in Unreal is related to driver optimization. I wonder if the FX cards would pull ahead with an 81.xx series driver?

The more I learn, the more I realize how much I don't know.
OPL3 FM vs. Roland MT-32 vs. General MIDI DOS Game Comparison
Let's benchmark our systems with cache disabled
DOS PCI Graphics Card Benchmarks

Reply 5 of 45, by SPBHM

Rank: Oldbie

very nice collection and testing; regarding the GF2 Ti vs MX440, I remember having discussions about it when the 440 was new; the 440 is helped by the clock advantage, and its architecture is improved - it's not an exact Geforce 2. The memory bandwidth efficiency is probably significantly better, and that's how the 2 pipeline card beats the 4 pipeline one so often; it's probably a memory bandwidth bottleneck.

one thing I noticed in your collection is that you used one of the Chinese coolers I was considering for my 9500PRO on some cards, specifically the FX5900. I think that card had a temperature sensor, right? If you checked, how does the temperature look with that cooler? And is it too noisy? thanks.

Reply 6 of 45, by kanecvr

Rank: Oldbie
SPBHM wrote:

very nice collection and testing

clueless1 wrote:

Beautiful, man. Great work.

meljor wrote:

Nice work! Keep em coming, will study this later.

Thank you, I appreciate it.

SPBHM wrote:

very nice collection and testing; regarding the GF2 Ti vs MX440, I remember having discussions about it when the 440 was new; the 440 is helped by the clock advantage, and its architecture is improved - it's not an exact Geforce 2. The memory bandwidth efficiency is probably significantly better, and that's how the 2 pipeline card beats the 4 pipeline one so often; it's probably a memory bandwidth bottleneck.

one thing I noticed in your collection is that you used one of the Chinese coolers I was considering for my 9500PRO on some cards, specifically the FX5900. I think that card had a temperature sensor, right? If you checked, how does the temperature look with that cooler? And is it too noisy? thanks.

The chinese coolers work well. They come in two sizes - one fits the Riva TNT / Geforce 2/3/4 MX, the other fits the 5900/6600 and so on. They also fit equivalent ATi cards. Both coolers are very, very quiet and quite a bit better than the stock solution. They are of surprisingly good build quality too, from the heatsink to the ball bearing fan. I use them on cards with insufficient (Radeon 9800XT) or defective (Club3D FX5900 / Hercules GF3 Ti500) coolers. With the large version the 5900XT does about 65-67C tops. The mounting systems differ as well - the small ones use classic plastic spring backed clips, while the large one uses a bolt-through metal system it comes with.

clueless1 wrote:

Maybe the weirdness in Unreal is related to driver optimization. I wonder if the FX cards would pull ahead with an 81.xx series driver?

I tried later drivers - DK II won't run with 81.xx drivers, and neither will some of the older cards in the test. The FX series gets a slight improvement in OpenGL, but loses some DX8 performance with those drivers. After weeks of research, I came to believe 66.93 and its Win9x equivalent are the best Forceware drivers for this spread of video cards.

The Geforce 2 cards are actually a bit faster (5%) with FW 43.xx drivers, so if you use anything up to and including a GF2, use those.

Reply 7 of 45, by willow

Rank: Member
kanecvr wrote:

Reserved for 3DFX Cards.

3DFX Cards used in tests:

- 3DFX Voodoo 2 SLi 2x12MB, (Creative 3D Blaster II), 100c / 100m, 2x 192 bit
- 3DFX Voodoo Banshee (Diamond Monster Fusion) 16MB SGRAM, 125c / 125m, 128 bit
- 3DFX Voodoo 2 2000 (STB) 16MB SDRAM, 143c / 143m, 128 bit
- 3DFX Voodoo 3 3000 (STB) 16MB SDRAM, 166c / 166m, 128 bit
- 3DFX Voodoo 3 3500 (unknown) 16MB SDRAM, 183c / 183m, 128 bit
- 3DFX Voodoo 4 4500 (PowerColor EvilKing 4), 166c / 166m, 128 bit
- 3DFX Voodoo 5 5500 (3dfx), 32MB x2, 166c / 166m, 2x 128 bit

Test system:

CPU: Athlon XP 3200+ (512kb, Barton core) 2333Mhz, FSB 333
Mainboard: Shuttle AK35GT2/R (VIA KT333a)
RAM: 1GB Kingmax DDR400
HDD: 40GB Maxtor IDE
Sound Card: Aureal Vortex 2
PSU: 350W FSP Group (80+ certified)

Software:

- Windows 98 Second Edition
- Latest 3dfx release driver for said cards
- Voodoo2 Tweaker V1.1
- VoodooControl V1.82 Beta

Could you test with voodoo 1?

Nice work.

Reply 8 of 45, by kanecvr

Rank: Oldbie

I can, but I believe even a single V2 is not enough for most of the games / tests, let alone a V1 - but I'll include the V1 out of shared curiosity 😜

Reply 9 of 45, by Tetrium

Rank: l33t++

This is absolutely amazing work! Very well done!

And I always knew those GF4MX graphics cards were pretty good, I really loved all those people who wanted to throw them out 😁

Could you perhaps send me a message where I could obtain these Chinese coolers? I'm actually kinda in need of new ones that are not crap or need to be cannibalized from other cards 😒

And I presume with the defective FX5900 cooler that your specific FX5900 stock cooler was actually malfunctioning and not that the stock cooler is insufficient or has a design problem that decreases the life of these cards? If so, I'd need to replace the stock cooler of my FX5900 because I'd like to prevent it from dying prematurely 🤣

Nicely done! 😁

Whats missing in your collections?
My retro rigs (old topic)
Interesting Vogons threads (links to Vogonswiki)
Report spammers here!

Reply 11 of 45, by Putas

Rank: Oldbie
kanecvr wrote:

As you can see above, the GF 4 MX440 is crippled when compared with the Geforce 2 Titanium, witch has twice the number of fragment pipelines, texture units and raster operators. It has an extra 20Mhz core clock, and both cards use DDR SGRAM clocked at 200MHz (400 effective) - but the GF 4 MX 440 manages to outpace the GF 2 Ti not only in Quake 3 at 640x480, but in 3D Mark 2000 and 2001 as well. What gives? Theoretically the Geforce 2 Ti should be twice as fast.

You could just read old reviews to find out. The performance of a graphics chip does not depend only on fill rate, and your resolutions are often way too low to put the newer ones to use.

Looking forward to Vanta numbers.

Reply 13 of 45, by candle_86

Rank: l33t
Putas wrote:
kanecvr wrote:

As you can see above, the GF4 MX440 is crippled compared with the GeForce 2 Titanium, which has twice the number of fragment pipelines, texture units and raster operators. The MX440 has an extra 20MHz of core clock, and both cards use DDR SGRAM clocked at 200MHz (400 effective) - but the GF4 MX440 manages to outpace the GF2 Ti not only in Quake 3 at 640x480, but in 3D Mark 2000 and 2001 as well. What gives? Theoretically the GeForce 2 Ti should be twice as fast.

You could just read old reviews to find out. The performance of a graphics chip does not depend only on fillrates, and your resolutions are often way too low to put the newer chips to good use.

Looking forward to Vanta numbers.

It's the memory clock - the reason the MX440 does so well is that it has the GeForce 3's crossbar memory controller, so it can actually use all of its bandwidth. The GeForce 2 cards did not have a crossbar memory controller, so they were very inefficient with their memory. If the GeForce 2 had been built with one, things would look very different - the cards would surely compete with, and in older games often beat, the GeForce 3, simply because the core clock speed would suddenly come into play.
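
A quick back-of-the-envelope sketch (my own illustration, not from the thread) shows why the GF2 Ti's paper fillrate can't be fed. The clocks and pipeline counts below are the nominal published specs, and the 8-bytes-per-drawn-pixel traffic figure is a rough assumption (32-bit colour write plus Z read/write, no compression), not a measured number:

```python
# Fillrate vs. memory-bandwidth estimate for GF2 Ti vs GF4 MX440.
# Nominal specs: GF2 Ti = 250MHz core, 4 pixel pipes; MX440 = 270MHz core, 2 pipes.
# Both use a 128-bit bus with DDR memory at 200MHz (400MHz effective).

def pixel_fillrate(core_mhz, pipelines):
    """Theoretical pixel fillrate in Mpixels/s."""
    return core_mhz * pipelines

def mem_bandwidth(bus_bits, effective_mhz):
    """Peak memory bandwidth in MB/s."""
    return bus_bits // 8 * effective_mhz

BYTES_PER_PIXEL = 8  # assumed traffic per drawn pixel (colour + Z)

gf2ti_fill = pixel_fillrate(250, 4)   # 1000 Mpix/s on paper
mx440_fill = pixel_fillrate(270, 2)   # 540 Mpix/s on paper

# What the shared 6400 MB/s bus can actually feed under this assumption:
bw_limit = mem_bandwidth(128, 400) // BYTES_PER_PIXEL  # 800 Mpix/s
```

Under this estimate the GF2 Ti's theoretical 1000 Mpix/s exceeds the roughly 800 Mpix/s the bus can feed, so its extra pipelines stall waiting on memory, while the MX440's 540 Mpix/s fits inside the same bandwidth and the crossbar controller gets it closer to that peak in practice.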

Reply 14 of 45, by clueless1

Rank: l33t
sf78 wrote:

Very nice. I remember back then people talking about FX 5200 being a POS and indeed it really is. 🤣

Wait, so a card that's getting at least 119FPS in these benchmarks, and performs about on par with a GF3 Ti200 is a POS? I have slightly different parameters for that category. 😉

The more I learn, the more I realize how much I don't know.
OPL3 FM vs. Roland MT-32 vs. General MIDI DOS Game Comparison
Let's benchmark our systems with cache disabled
DOS PCI Graphics Card Benchmarks

Reply 15 of 45, by Rhuwyn

Rank: Oldbie
clueless1 wrote:
sf78 wrote:

Very nice. I remember back then people talking about FX 5200 being a POS and indeed it really is. 🤣

Wait, so a card that's getting at least 119FPS in these benchmarks, and performs about on par with a GF3 Ti200 is a POS? I have slightly different parameters for that category. 😉

But it's two whole generations newer than the GeForce 3 Ti200, and the Ti200 is not even the fastest card of that generation. I would have expected more out of it as well.

Reply 16 of 45, by elianda

Rank: l33t

Your performance discussion of the GF4 MX doesn't mention that the card employs Light Speed Memory Architecture II, which is critical, as memory bandwidth was always the bottleneck of the GF2 series.
GF4 MX460 results would also be nice to have, as this card has the same core clock as a GF4 Ti4600 and the same memory bandwidth as a GF4 Ti4400. Typically it performs about 10% faster than a GF4 MX440.

Retronn.de - Vintage Hardware Gallery, Drivers, Guides, Videos. Now with file search
Youtube Channel
FTP Server - Driver Archive and more
DVI2PCIe alignment and 2D image quality measurement tool

Reply 17 of 45, by candle_86

Rank: l33t
elianda wrote:

Your performance discussion of the GF4 MX doesn't mention that the card employs Light Speed Memory Architecture II, which is critical, as memory bandwidth was always the bottleneck of the GF2 series.
GF4 MX460 results would also be nice to have, as this card has the same core clock as a GF4 Ti4600 and the same memory bandwidth as a GF4 Ti4400. Typically it performs about 10% faster than a GF4 MX440.

They are also rare - they didn't sell well, being priced the same as a GeForce 3 while underperforming it. That's why the card was discontinued so quickly.

Reply 18 of 45, by Tetrium

Rank: l33t++
clueless1 wrote:
sf78 wrote:

Very nice. I remember back then people talking about FX 5200 being a POS and indeed it really is. 🤣

Wait, so a card that's getting at least 119FPS in these benchmarks, and performs about on par with a GF3 Ti200 is a POS? I have slightly different parameters for that category. 😉

^Agreed. I've used cards to "emulate" other cards before (mostly using a GF2MX to "emulate" a GF1).
Btw, I really liked how my GF3 Ti200 performed. I never actually had an FX5200 until about last year, so I never ended up using one, but I do suppose the GF3 Ti200 might be more practical when it comes to what games it can support.

To me, a POS is something that's almost completely useless and simply not worth trying. Imo even the Cyrix MII is at least 10 steps above POS, and virtually nobody seemed to like them till retrocomputing started taking off (along with CPU collectors saving them from all getting melted down).

Another nice example would be VIA C3.


Reply 19 of 45, by kanecvr

Rank: Oldbie
Tetrium wrote:

This is absolutely amazing work! Very well done!

Thank you 😀

Tetrium wrote:
And I always knew those GF4MX graphics cards were pretty good, I really loved all those people who wanted to throw them out :D […]
Show full quote

And I always knew those GF4MX graphics cards were pretty good, I really loved all those people who wanted to throw them out 😁

Could you perhaps send me a message where I could obtain these Chinese coolers? I'm actually kinda in need of new ones that are not crap or need to be cannibalized from other cards 😒

And I presume, regarding the defective FX5900 cooler, that your specific card's stock cooler was malfunctioning, and not that the stock cooler is insufficient or has a design problem that shortens the life of these cards? If so, I'd need to replace the stock cooler on my FX5900 because I'd like to prevent it from dying prematurely 🤣

Nicely done! 😁

I buy the Chinese coolers locally (Romanian ad site) - but I'm pretty sure you can find them on eBay. Like I said, they come in two (three, actually) sizes. One has smaller hole spacing for older graphics cards; the second has larger hole spacing, a metal retention mechanism, and is twice the size of the smaller one; and there's also a very large version made of copper instead of aluminum, which comes with VRM and RAM sinks and is designed for the GeForce 8xxx / 9xxx and Radeon 2xxx / 3xxx / 4xxx series.

The stock cooler on my Club3D 5900XT was bad - a single-slot aluminum solution with few fins and little mass. The fan was quite small, noisy and inefficient. The card would hit 75-80°C easily.

This looks like the small version of the cooler I have on the GF3 Ti500 and the GF2 MX: http://www.ebay.com/itm/DC12V-2-Pin-Connector … sAAAOSwymxVMhQI

and this:

http://www.ebay.com/itm/Small-QQ-VGA-Video-Ca … vEAAOSwGotWsQt9

This one looks interesting - looks like full copper. Haven't used it myself: http://www.ebay.com/itm/Universal-43-50-53-55 … f4AAOSwpDdVbWTw

This is the larger version I have mounted on my FX 5900XT and my 9800XT cards: http://www.ebay.com/itm/2PIN-QQ-CORAL-VGA-Coo … wMAAOSwARZXmRKQ

feipoa wrote:

Could you add a GeForce 6200 to the lineup?

Yes, I have a 128MB card I can (and will) add to the test, together with my 6800LE. Unfortunately my only 6800XT AGP card is now dead and I don't have a replacement (they are really expensive here), and I haven't really tried to source a full 6800 card because they are pretty rare here and I have plenty of X800 PRO / XT AGP cards, which are cheap and perform better.

Putas wrote:

You could just read old reviews to find out. The performance of a graphics chip does not depend only on fillrates, and your resolutions are often way too low to put the newer chips to good use.

Couldn't find anything relevant unfortunately.

Putas wrote:

Looking forward to Vanta numbers.

I'll add a major update tomorrow, including the Vanta, TNT2 Pro, 6200 and 6800LE (in both 12 and 16 pipeline flavors). I don't have an AGP 6600 card to test, but I might rig something up using a PCIe board, a dual-core 4400+ locked to one core, HyperTransport turned down to match the 754 version of the chip, single-channel memory, and one of my 6600GT PCI-E cards. Might - it's a bit of a pain.

ATi cards are coming tonight, but I'm having issues with my Radeon SDR (7200)... regardless of driver and BIOS settings, I get a black screen after installing the card's driver. It works fine without it. Similar issues to the Radeon VE, but I got that one working with fast writes off and Catalyst 3.7...

sf78 wrote:

Very nice. I remember back then people talking about FX 5200 being a POS and indeed it really is. 🤣

Well, not really... it equals the Ti4200 in some tests, and the MX440 in others (namely 3DMark). Not really a bad card for DX8 gaming, but if I wanted to run at 1600x1200 I'd go for a Ti4600, the FX 5700, or the 5900XT. As for overall performance ***SPOILERS*** the 9800 PRO and the X800XT beat the crap out of everything else tested so far. Maybe the 6800LE @ 16pp will fare better.

Wait till you get a load of the 6200 😁

candle_86 wrote:

It's the memory clock - the reason the MX440 does so well is that it has the GeForce 3's crossbar memory controller, so it can actually use all of its bandwidth. The GeForce 2 cards did not have a crossbar memory controller, so they were very inefficient with their memory. If the GeForce 2 had been built with one, things would look very different - the cards would surely compete with, and in older games often beat, the GeForce 3, simply because the core clock speed would suddenly come into play.

Did not know that, thanks 😀.

EDITED - I was hoping to post ATi video card results tonight, but after glancing over my collection of 3DMark 2001 result screen captures (to reference against the results I got with this rig), I noticed the Radeon 7000, 7200, 8500 and 9000 cards were underscoring severely. In fact my 8500 got fewer points (and a worse framerate) on this build than it did on my Tualatin build... I have to redo some of the benchmarks on my 939 machine. It's using a single-core 3800+ that also runs at 2.4GHz, and I can remove one RAM stick to get it to run in single channel and decrease the HyperTransport speed to emulate a socket 754 machine. It also uses a VIA chipset.