VOGONS


Upgrade from GF4 to GF FX: rational, or a waste of money and time?


Reply 60 of 61, by shevalier

Rank: Oldbie
appiah4 wrote on Today, 15:07:

I own an FX 5800 Ultra and I have absolutely NO positive impression of the FX series.

The core-to-DDR memory frequency ratio for the 5700 is 425:250 = 1.7 (1.35 for the 5500 and 1.2 for the 5600).
Try lowering the DDR2 memory frequency on your 5800 Ultra to 200 MHz (from the original 250, which is synchronous with the core frequency).
Your “NO positive impression” will simply turn into rage.
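
For anyone who wants to play with the numbers, here is a quick back-of-the-envelope script. Only the 5700's 425:250 clocks come from the figures above; the 5500/5600 clock pairs are my own guesses that merely reproduce the quoted ratios, and the 128-bit DDR bus used for the bandwidth column is likewise an assumption on my part.

```python
# Quick sanity-check of the core:memory ratios quoted above.
# Only the 5700's 425:250 clocks come from the post; the 5500/5600 pairs
# are guesses that reproduce the quoted ratios, and the 128-bit DDR bus
# used for peak bandwidth is also an assumption, not an NVIDIA spec sheet.

def core_mem_ratio(core_mhz: float, mem_mhz: float) -> float:
    return core_mhz / mem_mhz

def ddr_bandwidth_gb_s(mem_mhz: float, bus_bits: int = 128) -> float:
    # DDR: two transfers per memory clock
    return mem_mhz * 1e6 * 2 * (bus_bits // 8) / 1e9

cards = {
    "FX 5500 (guessed clocks)": (270, 200),
    "FX 5600 (guessed clocks)": (325, 275),
    "FX 5700 (quoted clocks)":  (425, 250),
}

for name, (core, mem) in cards.items():
    print(f"{name:28s} ratio {core_mem_ratio(core, mem):.2f}, "
          f"peak ~{ddr_bandwidth_gb_s(mem):.1f} GB/s")

# The suggested experiment: dropping the memory clock from 250 to 200 MHz
# raises the core:memory ratio by 250/200 = 1.25x and cuts peak bandwidth
# by 20%, whatever the core clock happens to be.
print(f"250 -> 200 MHz: ratio x{250/200:.2f}, bandwidth x{200/250:.2f}")
```

The point it illustrates: downclocking the memory by 20% leaves the core outrunning the memory by a further 25%, on top of a 20% cut in peak bandwidth, regardless of the core clock.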

Aopen MX3S, PIII-S Tualatin 1133, Radeon 9800Pro@XT BIOS, Audigy 4 SB0610
JetWay K8T8AS, Athlon DH-E6 3000+, Radeon HD2600Pro AGP, Audigy 2 Value SB0400
Gigabyte Ga-k8n51gmf, Turion64 ML-30@2.2GHz , Radeon X800GTO PL16, Diamond monster sound MX300

Reply 61 of 61, by douglar

Rank: l33t
shevalier wrote on Today, 18:01:

Your “NO positive impression” will simply turn into rage.

So the async memory clocks probably hurt memory latency, yes?

Does this sound like a pretty good summary of what went wrong with the FX line?

  • The FX series used much deeper shader execution pipelines than were common at the time. Execution could stall for many clock cycles if the required instructions or texture data weren't already cached on the GPU.
  • The FX series employed an early crossbar memory controller that favored long, streaming transfers over small, latency-sensitive fetches. This design amplified shader stalls, because urgent, short shader memory requests could end up waiting behind burst transactions that were already in progress.
  • NVIDIA expected drivers and the shader compiler to compensate by aggressively scheduling and prefetching data into the GPU as needed. However, by the time the FX made it to market, developers were increasingly relying on dependent texture reads that used the output of one shader operation as input to another. This created memory access patterns that were difficult for drivers to predict in advance.
  • NVIDIA tried to compensate with drivers that replaced developer-written shaders with lower-image-quality substitutes, especially in benchmarks. And yeah, no one liked that solution.
  • Async memory configurations had worse effective memory latency than synchronous ones, since requests pay a resynchronization cost each time they cross between clock domains, which caused outsized performance issues (rough toy model below).
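
Here is a toy model of that last point, just to make the latency cost concrete. Every cycle count in it is a made-up illustrative value, not an NV3x figure, and the 425/250 clocks are simply borrowed from the 5700 example earlier in the thread.

```python
# Toy model of why asynchronous memory clocks hurt latency: a request has
# to be re-synchronized when it crosses from the core clock domain into
# the memory clock domain, and again when the data comes back. All cycle
# counts here are made-up illustrative values, not NV3x figures.

def access_latency_ns(core_mhz, mem_mhz, crossing_penalty,
                      core_cycles=20,       # assumed time in core-side queues/controller
                      mem_cycles=16,        # assumed DRAM access time, in memory clocks
                      crossing_cycles=2):   # assumed synchronizer cost per domain crossing
    core_ns = 1000.0 / core_mhz
    mem_ns = 1000.0 / mem_mhz
    latency = core_cycles * core_ns + mem_cycles * mem_ns
    if crossing_penalty:
        # one crossing for the outgoing request, one for the returning data
        latency += 2 * crossing_cycles * mem_ns
    return latency

core_clk_ns = 1000.0 / 425
without = access_latency_ns(425, 250, crossing_penalty=False)
with_pen = access_latency_ns(425, 250, crossing_penalty=True)
print(f"same clocks, no crossing penalty:   {without:.1f} ns "
      f"({without / core_clk_ns:.0f} core clocks)")
print(f"same clocks, with crossing penalty: {with_pen:.1f} ns "
      f"({with_pen / core_clk_ns:.0f} core clocks)")
```

The exact numbers are meaningless; the point is that the crossing penalty is paid in slow memory clocks, so it shows up as extra idle core clocks on every miss, which is exactly what a deep, stall-prone shader pipeline can least afford.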