VOGONS


First post, by Kahenraz

User metadata
Rank l33t

I was playing around with my GeForce FX 5600 and ran 3DMark 99 on a 533 MHz Mendocino Celeron and then a fast 3 GHz Pentium 4.

On the Celeron, the benchmark starts at about 35 FPS but dips down to about 25 FPS as the spaceships fly over the hill at the start of the benchmark. On the Pentium 4, it starts off at about 180 FPS, dipping to about 135 FPS over the hill. This test ran about the same on a GeForce FX 5200 and 5900 on the Mendocino.

This just goes to show that there is a lot of untapped power available to this generation of cards which may simply be left on the table when paired with a processor that is too slow to take advantage of it.

For those who don't know, a TNT2 will easily outperform an FX 5200 on a slower processor. Newer graphics cards require later drivers, which have very high CPU overhead; this is one of the reasons "older" drivers are preferred on these vintage CPUs. The performance gap doesn't begin to close until Coppermine, which brought SSE instructions to this class of CPU.

My FX 5600 scored 2675 3DMarks and 4245 CPU 3DMarks on the Celeron, and 12540 3DMarks and 36140 CPU 3DMarks on the Pentium 4, for reference.
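
For a rough sense of scale, simply dividing those numbers:

    180 / 35 ≈ 5.1x (start of the scene)
    135 / 25 = 5.4x (over the hill)
    12540 / 2675 ≈ 4.7x (3DMarks)
    36140 / 4245 ≈ 8.5x (CPU 3DMarks)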

Last edited by Kahenraz on 2024-03-25, 01:25. Edited 3 times in total.

Reply 1 of 8, by paradigital

User metadata
Rank Oldbie

It’s hardly a surprise that a slow CPU that is 5 years older than the GeForce FX series bottlenecks the heck out of it. In 2003 we were still well within the era of PC speed increases where 12-18 months made significant changes to the landscape, so the fact that a meagre Celeron ruins the output of even a modest FX-series GPU isn’t even remotely surprising.

Don’t forget that the FX series shares a launch year with the Athlon 64!

Reply 3 of 8, by BitWrangler

User metadata
Rank l33t++

It is game design that still ties them together. If you are just doing GPU compute on anything that will do OpenCL or CUDA, hashing, crunching or whatever, you can use the most minimal CPU you can put on a board that runs the cards and you will get the same results as having the fastest... possibly better if power and heat are factors.
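
As a rough illustration (a minimal made-up sketch, not taken from any real project), this is what a pure GPU "crunching" workload looks like in CUDA. All the arithmetic runs on the GPU; the host CPU's entire job is to allocate a buffer, launch the kernel and wait, which is why its speed barely matters here:

    // Toy "crunching" kernel: every thread grinds through arithmetic on the GPU.
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void crunch(unsigned int *out, int iterations)
    {
        unsigned int idx = blockIdx.x * blockDim.x + threadIdx.x;
        unsigned int x = idx;
        for (int i = 0; i < iterations; ++i)
            x = x * 2654435761u + 0x9e3779b9u;   // cheap stand-in for real hashing
        out[idx] = x;
    }

    int main()
    {
        const int threads = 256, blocks = 4096, iterations = 1 << 20;
        unsigned int *d_out = nullptr;
        cudaMalloc(&d_out, (size_t)threads * blocks * sizeof(unsigned int));

        // This is everything the CPU ever does: one launch, one wait.
        crunch<<<blocks, threads>>>(d_out, iterations);
        cudaDeviceSynchronize();

        cudaFree(d_out);
        printf("done\n");
        return 0;
    }

The CPU only matters again when you have to feed data in and read results back, or when the work is split into lots of tiny launches.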

Unicorn herding operations are proceeding, but all the totes of hens teeth and barrels of rocking horse poop give them plenty of hiding spots.

Reply 5 of 8, by mcyt

User metadata
Rank Newbie

Games were always more or less designed to take advantage of the CPU and GPU power available at the time, maybe looking ahead a couple of years, and that forward-looking approach started around the time 3D accelerators became a thing at all. 3D benchmark apps were usually (especially in the case of 3DMark) designed not so much to showcase what a particular card could do as to show how an overall system would perform with present-day, or near-future, games. So, pretty much the same thing.

Every system is different and every game too. Some games may be CPU-constrained on your system, some GPU-constrained. That's as true now as it was 20 years ago. Developers can't predict how absolutely everybody is going to configure their systems at any given time, but the wild card is the CPU and GPU manufacturers themselves... who often release hardware advertised as "current" but that's really last-gen hardware at low cost.

The FX 5200 was one such GPU; it was worse than the GeForce 4 MX series (already a budget line within the GeForce 4 range, really more in line with the GeForce 2) even at the time of its release. This card was really intended for OEMs to put in systems so they could advertise having an "FX" card. But it was in no way comparable to the "real" FX cards like the 5800 and 5900.

There's not really even a modern-day equivalent; Nvidia does still make a lot of lower-end graphics cards today, but they don't put them in the current "RTX 40" series. They put them in a lower series (30 series, 16 series, etc.), which was an option I guess they hadn't thought of in the FX era.

So it's not surprising that a better FX card will be CPU-constrained in some games and/or benchmarks, even while an FX 5200 is clearly GPU-constrained in those same games/benchmarks (assuming the same CPU). The software was designed around what was assumed to be an average performance CPU/GPU combo at the time, but the balance can obviously be off in one way or another.

If what you're saying is that you shouldn't assume all FX cards are equal, I do agree with that. There was a period there when you really could not assume even current-gen performance from current-gen Nvidia products unless you knew your model numbers. Some would argue that's also true of the 4060 today, but as a 4060 owner *and* an owner of older Nvidia cards from when things were really, very different, I disagree 😀

Reply 6 of 8, by rasz_pl

User metadata
Rank l33t

Yeah, try something with Shader Model 2.0. You know, the feature (DirectX 9) Nvidia advertised for this generation? 😀
Or run a comparison test against a GF4 for a laugh.

Open Source AT&T Globalyst/NCR/FIC 486-GAC-2 proprietary Cache Module reproduction

Reply 7 of 8, by ciornyi

User metadata
Rank Member

Well, it's not faster, especially when you run something DX9-related. Running an old 3DMark and drawing conclusions based on it isn't correct.

DOS: 166mmx/16mb/Y719/S3virge
DOS/95: PII333/128mb/AWE64/TNT2M64
Win98: P3_900/256mb/SB live/3dfx V3
Win Me: Athlon 1700+/512mb/Audigy2/Geforce 3Ti200
Win XP: E8600/4096mb/SB X-fi/HD6850

Reply 8 of 8, by Kruton 9000

User metadata
Rank Newbie
rasz_pl wrote on 2024-03-26, 03:28:

Or run a comparison test against a GF4 for a laugh.

GeForce FX cards from the 5700 up are faster than any of the GF4s if the CPU is fast enough (a fast P4 or A64). The difference is bigger at higher quality settings.