VOGONS


upgrade from gf4 to gf fx rational or waste of money and time.


Reply 60 of 64, by shevalier

Rank: Oldbie
appiah4 wrote on Today, 15:07:

I own an FX 5800 Ultra and I have absolutely NO positive impression of the FX series.

The core-to-DDR memory frequency ratio for the 5700 is 425:250 = 1.7 (1.35 for the 5500 and 1.2 for the 5600).
Try lowering the DDR2 memory frequency on your 5800 Ultra to 200 MHz (from the original 250, which is synchronous with the core frequency).
Your “NO positive impression” will simply turn into rage.
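
Just to make the arithmetic explicit, a minimal sketch (all clock figures are taken from the post above, not from spec sheets):

# Core-to-memory clock ratios for the configurations mentioned above.
# Clocks in MHz, as quoted in the post (not verified against spec sheets).
configs = {
    "FX 5700, stock":                     (425, 250),
    "FX 5800 Ultra, stock (synchronous)": (250, 250),
    "FX 5800 Ultra, memory at 200":       (250, 200),
}

for name, (core, mem) in configs.items():
    print(f"{name}: {core}:{mem} -> ratio {core / mem:.2f}")

The point of the suggested experiment is simply to push the synchronous 5800 Ultra into the same asynchronous territory the 5700/5600/5500 already live in.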

Aopen MX3S, PIII-S Tualatin 1133, Radeon 9800Pro@XT BIOS, Audigy 4 SB0610
JetWay K8T8AS, Athlon DH-E6 3000+, Radeon HD2600Pro AGP, Audigy 2 Value SB0400
Gigabyte Ga-k8n51gmf, Turion64 ML-30@2.2GHz , Radeon X800GTO PL16, Diamond monster sound MX300

Reply 61 of 64, by douglar

Rank: l33t
shevalier wrote on Today, 18:01:

Your “NO positive impression” will simply turn into rage.

So the async memory clocks probably hurt memory latency, yes?

Does this sound like a pretty good summary of what went wrong with the FX line?

  • The FX series used much deeper shader execution pipelines than were common at the time. Execution could stall for many clock cycles if the required instructions or texture data weren't already cached on the GPU.
  • The FX series employed an early crossbar memory controller that favored long, streaming transfers over small, latency-sensitive fetches. This design amplified shader stalls, because urgent, short shader memory requests could end up delayed if burst transactions were already in progress.
  • NVIDIA expected drivers and the shader compiler to compensate by aggressively scheduling and prefetching data into the GPU as needed. However, by the time the FX made it to market, developers were increasingly relying on dependent texture reads, where the result of one texture fetch or calculation is used as the coordinates for another fetch. This created memory access patterns that were difficult for drivers to predict in advance.
  • NVIDIA also tried to compensate with drivers that replaced developer-written shaders with lower-image-quality versions, especially in benchmarks. Unsurprisingly, no one liked that solution.
  • Async memory configurations would have relatively worse memory latency than synchronous ones, causing outsized performance issues (a toy model of this is sketched after the list).
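
To put a rough number on that last point, here is a toy model rather than measured NV3x behaviour: it assumes each crossing between the core and memory clock domains costs about 2.5 cycles of the destination clock (synchronizer stages plus the average wait for the next edge), and that the DRAM access itself takes a fixed 60 ns.

# Toy model: round-trip latency of a memory request, in core clock cycles.
# The 2.5-cycle crossing cost and the 60 ns DRAM access time are assumptions
# for illustration, not NV3x measurements.
def roundtrip_core_cycles(core_mhz, mem_mhz, dram_ns=60.0, sync_cycles=2.5):
    core_t = 1000.0 / core_mhz   # core clock period in ns
    mem_t = 1000.0 / mem_mhz     # memory clock period in ns
    if core_mhz == mem_mhz:
        crossing_ns = 0.0        # synchronous case: no resynchronization
    else:
        # one crossing into the memory domain, one back into the core domain
        crossing_ns = sync_cycles * (mem_t + core_t)
    return (crossing_ns + dram_ns) / core_t

for core, mem in [(425, 425), (425, 250), (425, 200)]:
    print(f"core {core} / mem {mem} MHz: ~{roundtrip_core_cycles(core, mem):.0f} core cycles")

The absolute numbers are invented; the only takeaway is that the wider the gap between the two clocks, the more core cycles a stalled shader sits waiting for a latency-sensitive fetch.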

Reply 62 of 64, by The Serpent Rider

Rank: l33t++

The GeForce FX series is essentially Shader Model 1.4 hardware (and does that pretty well) which can also do Shader Model 2.0, but with a massive penalty.
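
A rough way to picture that penalty, as a toy model with made-up numbers rather than official figures: NV3x runs SM1.x-style integer math at full rate, but SM2.0 mandates floating-point precision, and NV3x throughput reportedly drops as more full-precision temporaries stay live, with an FP32 temporary costing roughly twice the register budget of an FP16 one.

# Toy model of NV3x fragment throughput vs. live temporary registers.
# The 4-slot budget and the "halve throughput per 2 extra slots" rule are
# assumptions for illustration, not official NV3x figures.
def relative_throughput(fp16_temps, fp32_temps, free_slots=4):
    slots = fp16_temps + 2 * fp32_temps   # FP32 temps cost two slots each
    if slots <= free_slots:
        return 1.0
    extra = slots - free_slots
    return 1.0 / (2 ** ((extra + 1) // 2))

print(relative_throughput(fp16_temps=2, fp32_temps=0))  # light SM1.x-style shader -> 1.0
print(relative_throughput(fp16_temps=0, fp32_temps=4))  # FP32-heavy SM2.0 shader -> 0.25

Under a model like this, the partial-precision (FP16) hint in SM2.0 and the driver-side shader replacements mentioned above both help for the same reason: they cut the register cost.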

I must be some kind of standard: the anonymous gangbanger of the 21st century.

Reply 63 of 64, by agent_x007

Rank: Oldbie
douglar wrote on Today, 19:25:

Does this sound like a pretty good summary of what went wrong with the FX line?

NV3x shortcoming articles:
https://web.archive.org/web/20040817214545/ht … doc.aspx?i=2031

As one smart guy once said:

Best way to know what's wrong with current GPUs is to look at changes in next gen.

https://alt.3dcenter.org/artikel/nv40_pipeline/index_e.php

Reply 64 of 64, by Joseph_Joestar

Rank: l33t++

This thread might be relevant: GeForce 4 vs. GeForce FX?

Personally, I wouldn't bother with a GeForce FX. If the goal is to max out Win9x games with AA/AF added on top, go straight for a Radeon X800 series card and an LGA775 system. The lack of table fog is easily circumvented by dual booting Win9x with WinXP and using Catalyst 7.11 on the latter. And the lack of paletted textures negatively impacts visuals in just five games. The rest of the time, that feature is simply used to improve performance.

PC#1: Pentium MMX 166 / Soyo SY-5BT / S3 Trio64V+ / Voodoo1 / YMF719 / AWE64 Gold / SC-155
PC#2: AthlonXP 2100+ / ECS K7VTA3 / Voodoo3 / Audigy2 / Vortex2
PC#3: Core 2 Duo E8600 / Foxconn P35AX-S / X800 / Audigy2 ZS
PC#4: i5-3570K / MSI Z77A-G43 / GTX 980Ti / X-Fi Titanium