Reply 20 of 35, by bakemono

Rank: Oldbie

I noticed that some cards fare badly when running 3DMark2001 with software T&L, or when running an old-style OpenGL renderer that uses vertex arrays instead of vertex buffer objects. For example, with a Radeon 7500 the high-polygon test gets about 18 million triangles/second normally, but with software T&L it drops way down to something like 3.5 million, even with a fast CPU. With a different video card you can still get maybe 16 million with software T&L, so it can't be limited by the CPU alone.
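
For anyone who hasn't run into the distinction, here is a rough sketch of the two OpenGL submission paths (an illustrative fragment only; vertices, vertex_count and size_in_bytes are made-up names, and a current GL context plus the VBO entry points are assumed):

    /* Old-style client-side vertex array: the driver has to pull the
       vertex data out of system memory on every draw call, which leans
       hard on the CPU, especially with software T&L. */
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, vertices);    /* data lives in system RAM */
    glDrawArrays(GL_TRIANGLES, 0, vertex_count);

    /* Vertex buffer object (OpenGL 1.5 / ARB_vertex_buffer_object): the
       data is uploaded once to memory the GPU can reach directly, so the
       per-draw CPU and driver overhead is much lower. */
    GLuint vbo;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, size_in_bytes, vertices, GL_STATIC_DRAW);
    glVertexPointer(3, GL_FLOAT, 0, (void *)0);   /* offset into the bound VBO */
    glDrawArrays(GL_TRIANGLES, 0, vertex_count);

The first path is the one that falls apart on some cards/drivers, which lines up with the Radeon 7500 numbers above.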

again another retro game on itch: https://90soft90.itch.io/shmup-salad

Reply 21 of 35, by Auzner

Rank: Member

Just agreeing with some of the discussion that has unfolded:

kixs wrote:

Good comparison reference.
Because by the GF4 & FX era, chipsets were improving. Gamers were at 2 GHz, DDR, and ATA/66.

Tom's Setups:
AMD Athlon 1000
ABIT KT7A RAID (VIA KT133A)
2 x 128 MB, PC 133 (Turbo/CL2)

AMD Athlon XP 2700+
ASUS A7N8X (nForce 2)
2 x 256 MB, PC 333 (2/2/2/5)

firage wrote:

Crank up the AA/AF filtering and you start to get some return for your GPU power with the newer chips. Maxed-out resolutions and anti-aliasing remain the main reason to build a heavily CPU-bottlenecked system.

That too. Re-run everything with AA & AF and look at the numbers. The GF4 & FX generations were starting to get better image-quality options.

Reply 22 of 35, by Woolie Wool

Rank: Member
vvbee wrote:

A company known for its hellishly short release cycles under fierce competition wouldn't sit there optimizing for obsolete titles, or prioritizing fixes for regressions in them, so you'd expect a notable fall in driver quality for those titles relative to a static reference.

Indeed. Nobody downloading the latest drivers to try out Far Cry's new HDR patch back in 2004 was going to give a damn about what happened to a six-year-old game, or how well their card stacked up against an older card in those old games at puny resolutions they'd never use. When I upgraded from a GeForce 2 MX to a Radeon 9200 back in the day, you'd better believe I wasn't running anything at 640x480.

CALIFORNIA_RAYZEN | REDBOX | FUNKENSTEIN_3D

Reply 23 of 35, by feipoa

Rank: l33t++

There are way too many variables when retroactively benchmarking graphics cards. This is why I tend to shy away from it and prefer to focus more on CPU and motherboard benchmarking. Even with that, there will always be someone who believes tests should have been done this way or that.

Plan your life wisely, you'll be dead before you know it.

Reply 24 of 35, by havli

Rank: Oldbie

Well, if you really want to benchmark only raw GPU power, then it is not so hard: just use as fast a CPU as possible plus a high resolution... done. That's the approach I prefer. That way you don't need to worry about driver CPU overhead and stuff like that. So, for example, in my methodology, running 1024x768 with 2xAA, the GeForce FX 5800 Ultra is twice as fast as the Ti 4600.

Phil's test is more like a CPU benchmark, because a PIII 1000 will bottleneck everything at 640x480, even a GF2. So this is a completely different approach, but perfectly valid too. It is interesting to see the results, but not that surprising that newer drivers have more CPU overhead: there is so much they must contain in order to work with a wide spectrum of video cards and newer DX/OGL versions. Sometimes there is even improved multithreading at the expense of lower performance on a single CPU.

Building a retro PC with an overkill GPU (to allow high resolutions + AA/AF) is tricky and may backfire. You end up with enough GPU power to run high resolutions, but you can't get enough fps because of driver overhead and the relatively weak period-correct CPU. With an older GPU the overhead is less limiting, but instead you run into a GPU-limited situation when trying to go high-res... and end up with poor fps as well, just for a different reason. 🤣
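
A crude way to picture the trade-off (a toy model with invented numbers, not anyone's actual benchmark data): frame time is roughly whichever is larger, the CPU/driver time or the GPU time per frame.

    #include <stdio.h>

    /* Toy bottleneck model: a frame can't finish faster than the slower
       of the CPU/driver work and the GPU work. All numbers are made up
       purely for illustration. */
    static double fps(double cpu_ms, double gpu_ms)
    {
        double frame_ms = cpu_ms > gpu_ms ? cpu_ms : gpu_ms;
        return 1000.0 / frame_ms;
    }

    int main(void)
    {
        /* newer card, heavy driver, weak CPU: CPU-bound despite GPU headroom */
        printf("new card at 640x480:   %.0f fps\n", fps(12.0, 4.0));
        /* older card, light driver: cheap CPU side, but high-res swamps the GPU */
        printf("old card at 1280x1024: %.0f fps\n", fps(6.0, 20.0));
        return 0;
    }

Either way you end up capped, just by a different component.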

HW museum.cz - my collection of PC hardware

Reply 25 of 35, by cyclone3d

Rank: l33t++

For some reason I was thinking he said 500 MHz for the CPU, but I am apparently wrong, as it was 1 GHz.

Anyway, if you look at benches of a lot of those cards with much faster systems and with higher DX versions, you will see a lot more difference.

Back then, different cards, and their drivers for that matter, were made to work best with the DX versions that were out at the time.

That is why the newer cards are not as good as the older cards when tested on the older DX versions.

And does it really matter for pretty much any of those games that the newer cards were getting lower fps in some of the older titles? Looks to me like they still have very good FPS.

And yeah, crank up the AA/AF on the newer cards and on the older cards and see what happens.

Yamaha modified setups and drivers
Yamaha XG repository
YMF7x4 Guide
Aopen AW744L II SB-LINK

Reply 26 of 35, by brostenen

Rank: l33t++

Yeah... It's a nice video with a lot of information in it. It did, however, (yet again) confirm my belief that FX cards are shit. Unless they are truly the only cards you can get hold of, keep away from that series of GPUs.

Don't eat stuff off a 15 year old never cleaned cpu cooler.
Those cakes make you sick....

My blog: http://to9xct.blogspot.dk
My YouTube: https://www.youtube.com/user/brostenen

001100 010010 011110 100001 101101 110011

Reply 27 of 35, by squiggly

Rank: Member
brostenen wrote:

Yeah... It's a nice video with a lot of information in it. It did, however, (yet again) confirm my belief that FX cards are shit. Unless they are truly the only cards you can get hold of, keep away from that series of GPUs.

They are great for DX8, the best in fact. But if your experience with an FX is limited to DX7 or DX9, then it's understandable that you would feel that way.

Reply 28 of 35, by Baoran

Rank: l33t

I was just recently wondering whether I should get one of those FX series cards, thinking they might be the fastest graphics cards that still support older features like 8-bit paletted textures, but after watching this video I'm starting to have doubts.
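
For what it's worth, 8-bit paletted textures are exposed through the GL_EXT_paletted_texture extension, so you can check what a given card/driver combo still advertises. A minimal sketch, assuming an OpenGL context is already current:

    #include <string.h>
    #include <GL/gl.h>

    /* Returns 1 if the driver advertises 8-bit paletted textures.
       Call with an OpenGL context current. */
    int has_paletted_textures(void)
    {
        const char *ext = (const char *)glGetString(GL_EXTENSIONS);
        return ext != NULL && strstr(ext, "GL_EXT_paletted_texture") != NULL;
    }

Worth running against the specific driver version too, since support can come and go between driver releases.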

Reply 30 of 35, by Katmai500

Rank: Member

Phil added a comment on the video saying that he's doing some additional testing of the cards at 640x480 with a P4 2.8 Northwood and RDRAM, and is seeing some of the same results.

Reply 32 of 35, by RogueTrip2012

Rank: Oldbie

Looking at the beginning of the video I see a Quadro FX card. Maybe the 3000? Phil does not state what he's doing in this case.

Has anyone compared whether a Quadro performs equal to its GeForce FX counterpart when using NVStrap?

I have an FX 3000 that I use RivaTuner to turn into a GeForce FX 5900 Ultra. It shows up as one, but the clocks are way lower by default. Even with auto overclocking I can't get close to the real clocks of the 5900.

> W98SE . P3 1.4S . 512MB . Q.FX3K . SB Live! . 64GB SSD
> WXP/W8.1 . AMD 960T . 8GB . GTX285 . SB X-Fi . 128GB SSD
> Win XI . i7 12700k . 32GB . GTX1070TI . 512GB NVME

Reply 33 of 35, by Putas

Rank: Oldbie
RogueTrip2012 wrote:

Looking at the beginning of the video I see a Quadro FX card. Maybe the 3000? Phil does not state what he's doing in this case.

Has anyone compared whether a Quadro performs equal to its GeForce FX counterpart when using NVStrap?

I have an FX 3000 that I use RivaTuner to turn into a GeForce FX 5900 Ultra. It shows up as one, but the clocks are way lower by default. Even with auto overclocking I can't get close to the real clocks of the 5900.

Looks like an FX 3000, but even if the clocks were lower, with his setup the card could not pull ahead anyway.

Reply 34 of 35, by RogueTrip2012

Rank: Oldbie
Putas wrote:
RogueTrip2012 wrote:

Looking at the beginning of the video I see a Quadro FX card. Maybe the 3000? Phil does not state what he's doing in this case.

Has anyone compared whether a Quadro performs equal to its GeForce FX counterpart when using NVStrap?

I have an FX 3000 that I use RivaTuner to turn into a GeForce FX 5900 Ultra. It shows up as one, but the clocks are way lower by default. Even with auto overclocking I can't get close to the real clocks of the 5900.

Looks like an FX 3000, but even if the clocks were lower, with his setup the card could not pull ahead anyway.

Something I noticed about my Quadro FX 3000 with the 45.23 drivers in Win98SE.

Go to the NVIDIA control panel and look at AA: it's set to Application.

Now open RivaTuner 2.24, go to the Direct3D driver settings and look at the AA tab. It's enabled by default at 2x?! (I installed both without and with the NVStrap more than once, and 2x has been enabled each time.) I wonder if this happens on other FX cards, and maybe the GF4 Ti cards?

I would like to test my card soon; just got to install 3DMark2000 first.

> W98SE . P3 1.4S . 512MB . Q.FX3K . SB Live! . 64GB SSD
> WXP/W8.1 . AMD 960T . 8GB . GTX285 . SB X-Fi . 128GB SSD
> Win XI . i7 12700k . 32GB . GTX1070TI . 512GB NVME

Reply 35 of 35, by appiah4

Rank: l33t++

What this does not explore, of course, is image quality (IQ). These were times when IQ was generally sacrificed in "optimizations" for speed, later unwound once the titles were no longer relevant to contemporary benchmarks, all of which fed a false sense of progress and planned obsolescence. So while later drivers may be faster, they may also be the only ones putting the real picture on your screen.

Retronautics: A digital gallery of my retro computers, hardware and projects.