VOGONS


Reply 60 of 83, by shevalier

Rank: Oldbie
appiah4 wrote on 2026-02-10, 15:07:

I own an FX 5800 Ultra and I have absolutely NO positive impression of the FX series.

The core-to-DDR memory frequency ratio for the 5700 is 425:250 = 1.7 (1.35 for the 5500 and 1.2 for the 5600).
Try lowering the DDR2 memory frequency on your 5800 Ultra to 200 MHz (from the original 250, which is synchronous with the core frequency).
Your “NO positive impression” will simply turn into rage.
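For reference, the ratio arithmetic is a one-liner. A minimal sketch, using the clock figures quoted above (the post's numbers, not verified specs):

```python
# Core-to-memory clock ratio, using the FX 5700 clocks quoted above
# (425 MHz core, 250 MHz DDR). Figures are the post's, not verified specs.
def clock_ratio(core_mhz: float, mem_mhz: float) -> float:
    return core_mhz / mem_mhz

print(clock_ratio(425, 250))  # 1.7
```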

Aopen MX3S, PIII-S Tualatin 1133, Radeon 9800Pro@XT BIOS, Audigy 4 SB0610
JetWay K8T8AS, Athlon DH-E6 3000+, Radeon HD2600Pro AGP, Audigy 2 Value SB0400
Gigabyte Ga-k8n51gmf, Turion64 ML-30@2.2GHz , Radeon X800GTO PL16, Diamond monster sound MX300

Reply 61 of 83, by douglar

Rank: l33t
shevalier wrote on 2026-02-10, 18:01:

Your “NO positive impression” will simply turn into rage.

So the async memory clocks probably hurt memory latency, yes?

Does this sound like a pretty good summary of what went wrong with the FX line?

  • The FX series used much deeper shader execution pipelines than were common at the time. Execution could stall for many clock cycles if the required instructions or texture data weren't already cached on the GPU.
  • The FX series employed an early crossbar memory controller that favored long, streaming transfers over small, latency-sensitive fetches. This design amplified shader stalls, because urgent, short shader memory requests could end up delayed if burst transactions were already in progress.
  • NVIDIA expected drivers and the shader compiler to compensate by aggressively scheduling and prefetching data into the GPU as needed. However, by the time the FX made it to market, developers were increasingly relying on dependent texture reads, which use the output of one shader operation as the input to another. This created memory access patterns that were difficult for drivers to predict in advance.
  • NVIDIA tried to compensate by shipping drivers that could replace developer-written shaders with lower-image-quality versions, especially in benchmarks. And yeah, no one liked that solution.
  • Async memory configurations would have relatively worse memory latency than synchronous ones, causing outsized performance issues.

Reply 62 of 83, by The Serpent Rider

Rank: l33t++

The GeForce FX series is essentially Shader Model 1.4 hardware (and does it pretty well) which can also do Shader Model 2.0, but with a massive penalty.

I must be some kind of standard: the anonymous gangbanger of the 21st century.

Reply 63 of 83, by agent_x007

Rank: Oldbie
douglar wrote on 2026-02-10, 19:25:

Does this sound like a pretty good summary of what went wrong with the FX line?

NV3x shortcoming articles:
https://web.archive.org/web/20040817214545/ht … doc.aspx?i=2031

As one smart guy once said:

Best way to know what's wrong with current GPUs is to look at changes in next gen.

https://alt.3dcenter.org/artikel/nv40_pipeline/index_e.php

Reply 64 of 83, by Joseph_Joestar

Rank: l33t++

This thread might be relevant: GeForce 4 vs. GeForce FX?

Personally, I wouldn't bother with a GeForce FX. If the goal is to max out Win9x games with AA/AF added on top, go straight for a Radeon X800 series card and an LGA775 system. The lack of table fog is easily circumvented by dual booting Win9x with WinXP and using Catalyst 7.11 on the latter. And the lack of paletted textures negatively impacts visuals in just five games. The rest of the time, that feature is simply used to improve performance.

My retro builds

Reply 65 of 83, by shevalier

Rank: Oldbie
douglar wrote on 2026-02-10, 19:25:
shevalier wrote on 2026-02-10, 18:01:

Your “NO positive impression” will simply turn into rage.

So the async memory clocks probably hurt memory latency, yes?

I don't know; maybe there just isn't enough memory bandwidth to keep the core fed at that clock speed.

I would simply suggest comparing cards that are comparable by market segment, rather than asking "why is my FX5200 slower than the GF4 Ti 4800?" As far as I understand, the segments map to the cards roughly like this:

GF3 Ti 500 = 4Ti 4800 = FX 5950 Ultra
                        FX 5900 Ultra
             4Ti 4600 = FX 5800 Ultra
GF3        = 4Ti 4400 = FX 5700 Ultra
                        (LE, VE, etc.)
GF3 Ti 200 = 4Ti 4200 = FX 5600 Ultra
2MX (?)    = 4MX 460  = FX 5200 Ultra
             4MX 440  = FX 5500
             4MX 400  = FX 5200

Aopen MX3S, PIII-S Tualatin 1133, Radeon 9800Pro@XT BIOS, Audigy 4 SB0610
JetWay K8T8AS, Athlon DH-E6 3000+, Radeon HD2600Pro AGP, Audigy 2 Value SB0400
Gigabyte Ga-k8n51gmf, Turion64 ML-30@2.2GHz , Radeon X800GTO PL16, Diamond monster sound MX300

Reply 66 of 83, by appiah4

Rank: l33t++
shevalier wrote on 2026-02-11, 06:31:
douglar wrote on 2026-02-10, 19:25:
shevalier wrote on 2026-02-10, 18:01:

Your “NO positive impression” will simply turn into rage.

So the async memory clocks probably hurt memory latency, yes?

I don't know, maybe there just isn't enough bandwidth to download data at this video core frequency.

I would simply suggest comparing cards that are comparable by market segment, rather than asking "why is my FX5200 slower than the GF4 Ti 4800?" As far as I understand, the segments map to the cards roughly like this:

GF3 Ti 500 = 4Ti 4800 = FX 5950 Ultra
                        FX 5900 Ultra
             4Ti 4600 = FX 5800 Ultra
GF3        = 4Ti 4400 = FX 5700 Ultra
                        (LE, VE, etc.)
GF3 Ti 200 = 4Ti 4200 = FX 5600 Ultra
2MX (?)    = 4MX 460  = FX 5200 Ultra
             4MX 440  = FX 5500
             4MX 400  = FX 5200

There most certainly is not as big a performance spread between something like a GF3 Ti500 and a Ti200 as there is between an FX5950 Ultra and an FX5600 Ultra. That comparison is not valid in any universe that might exist. More realistically, GF3 = FX5800 Ultra, GF3 Ti200/Ti500 = FX5900/5950 Ultra, and the GF2MX line = FX5200/5500.

Reply 67 of 83, by shevalier

Rank: Oldbie
appiah4 wrote on 2026-02-11, 07:48:
shevalier wrote on 2026-02-11, 06:31:
douglar wrote on 2026-02-10, 19:25:

So the async memory clocks probably hurt memory latency, yes?

I don't know, maybe there just isn't enough bandwidth to download data at this video core frequency.

I would simply suggest comparing cards that are comparable by market segment, rather than asking "why is my FX5200 slower than the GF4 Ti 4800?" As far as I understand, the segments map to the cards roughly like this:

GF3 Ti 500 = 4Ti 4800 = FX 5950 Ultra
                        FX 5900 Ultra
             4Ti 4600 = FX 5800 Ultra
GF3        = 4Ti 4400 = FX 5700 Ultra
                        (LE, VE, etc.)
GF3 Ti 200 = 4Ti 4200 = FX 5600 Ultra
2MX (?)    = 4MX 460  = FX 5200 Ultra
             4MX 440  = FX 5500
             4MX 400  = FX 5200

There most certainly is not as big a performance spread between something like a GF3 Ti500 and a Ti200 as there is between an FX5950 Ultra and FX5600 Ultra.

I did not say a word about the ratio of performance between marketing segments.
3Ti500 - the top; 4Ti4800 - the new top (as is the FX59x0)
3Ti450 (500 minus 50, a non-existent old-top card) = 4Ti4600/FX5800 Ultra - the old top
______
Performance level
______
Entry level
______
Crap level

Aopen MX3S, PIII-S Tualatin 1133, Radeon 9800Pro@XT BIOS, Audigy 4 SB0610
JetWay K8T8AS, Athlon DH-E6 3000+, Radeon HD2600Pro AGP, Audigy 2 Value SB0400
Gigabyte Ga-k8n51gmf, Turion64 ML-30@2.2GHz , Radeon X800GTO PL16, Diamond monster sound MX300

Reply 68 of 83, by appiah4

Rank: l33t++

That is not how market segments work. There was not enough product differentiation, with regard to price or performance, between the three GF3 cards to classify any of them as belonging to a different market segment. The GF3 was a top-tier card at its release. It was then replaced by the Ti200/Ti500, both of which were top-tier cards at their release. The entire GF4 Ti range, from the 4200 to the 4600, consisted of top-tier cards at release. The mid-tier cards of that era were either cards from the year before or from a second-tier vendor like Matrox, PowerVR, etc. You are trying to analyze that period through the market segments of today, which did not exist back then. This kind of segmentation started with the GeForce FX and Radeon 9000 series of cards. The market was very different. It was amazing. I miss it...

Reply 69 of 83, by havli

Rank: Oldbie

Just checked my old benchmark results https://hw-museum.cz/article/13/the-ultimate- … 2001---2005-/22

And I can't see any evidence of the FX Ultra cards having any significant performance boost over the non-Ultras that could be attributed to the core/memory frequency ratio. For example, the FX 5900 XT clocked at 390/700 vs. the FX 5950 Ultra clocked at 475/950: the performance difference is perfectly in line with the Ultra's higher clock speeds. +35% memory bandwidth and +22% GPU power, and on average there is a 20-30% difference, as expected. In some cases it is more (especially at high resolution with AA), but that is very likely caused by 128 vs. 256 MB of RAM.
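Those percentages can be sanity-checked directly from the quoted clocks. A quick sketch (clock figures are the ones cited in this post; the memory gain actually rounds to ~36%, close to the +35% quoted):

```python
# Relative gains of the FX 5950 Ultra (475/950) over the FX 5900 XT (390/700),
# using the clocks quoted in the post above.
def gain_pct(new: float, old: float) -> float:
    return (new / old - 1) * 100

core_gain = gain_pct(475, 390)  # ~21.8%, the "+22% GPU power"
mem_gain = gain_pct(950, 700)   # ~35.7%, the "+35% memory bandwidth"
print(f"core +{core_gain:.0f}%, memory +{mem_gain:.0f}%")
```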

HW museum.cz - my collection of PC hardware

Reply 70 of 83, by agent_x007

Rank: Oldbie
bartonxp wrote on 2026-02-09, 22:01:

I think the best Win98 cards were the ATI X850XT Platinum and the GeForce 6800 GTX Ultra.

And it's funny you call this math, what a waste of time.

It's good that you have your own opinion on something.

As for the math thing: if I use "%" and "=" symbols, I can't exactly call that English now, can I 😜
@havli confirmed what I wanted to say either way (I just didn't include the CPU/platform as a variable, but it always bottlenecks maximum performance in some way).

Reply 71 of 83, by shevalier

Rank: Oldbie
havli wrote on 2026-02-11, 09:27:

Just checked my old benchmark results https://hw-museum.cz/article/13/the-ultimate- … 2001---2005-/22

And I can't see any evidence of FX ultra having any significant performance boost over the non-Ultra that could be accounted to core/memory frequency ratio. For example FX 5900 XT clocked at 390/700 vs FX 5950 Ultra clocked at 475/950. The performance difference is perfectly in line with the higher clockspeed of the Ultra. +35% memory bandwidth and +22% GPU power.... and on average there is 20-30% difference, as expected. In some cases it is more (especially high resolution with AA), but very likely caused by 128 vs 256 MB of RAM.

As I mentioned above, not only have I never owned a 59x0 graphics card, I have never even held one in my hands.
I showed the results for the 5200, and the 5700 behaves exactly the same way.
Your results may be explained both by the difference in memory bus width (256-bit vs. 128-bit) and by the fact that the memory frequencies are high enough to simply be sufficient.

Aopen MX3S, PIII-S Tualatin 1133, Radeon 9800Pro@XT BIOS, Audigy 4 SB0610
JetWay K8T8AS, Athlon DH-E6 3000+, Radeon HD2600Pro AGP, Audigy 2 Value SB0400
Gigabyte Ga-k8n51gmf, Turion64 ML-30@2.2GHz , Radeon X800GTO PL16, Diamond monster sound MX300

Reply 72 of 83, by predator_085

Rank: Member
Joseph_Joestar wrote on 2026-02-10, 21:07:

This thread might be relevant: GeForce 4 vs. GeForce FX?

Personally, I wouldn't bother with a GeForce FX. If the goal is to max out Win9x games with AA/AF added on top, go straight for a Radeon X800 series card and an LGA775 system. The lack of table fog is easily circumvented by dual booting Win9x with WinXP and using Catalyst 7.11 on the latter. And the lack of paletted textures negatively impacts visuals in just five games. The rest of the time, that feature is simply used to improve performance.

Thanks a lot for the link. It was a very interesting read, and I have to agree. For my current setup with the Tualatin Celeron, the Ti 4200 is more than enough. It is already a very good choice. If I really crave an ultra-fast Windows 98 SE gaming rig, I would need to build a completely different and futuristic machine by Win98 SE standards. Either one of the last single-core CPUs in the Athlon 64 or Pentium 4 range, or maybe even an early dual-core machine.

Reply 73 of 83, by The Serpent Rider

Rank: l33t++

As someone who actually overclocked the NV35, I can tell you that it scales a lot better with core clock, and memory bandwidth is much less important. That's why the FX5900XT was such a steal back then.

I must be some kind of standard: the anonymous gangbanger of the 21st century.

Reply 74 of 83, by douglar

Rank: l33t
The Serpent Rider wrote on 2026-02-11, 12:04:

As someone who actually overclocked NV35, I can tell that it scales a lot better from core and memory bandwidth is much less important. That's why FX5900XT was such a steal back then.

Sounds like the NV35 models have sufficient memory bandwidth for the GPU, with some headroom, but the lower-end chips (NV31 and NV34) are often memory-bandwidth limited.

Reply 75 of 83, by Ozzuneoj

Rank: l33t
douglar wrote on 2026-02-11, 14:38:
The Serpent Rider wrote on 2026-02-11, 12:04:

As someone who actually overclocked NV35, I can tell that it scales a lot better from core and memory bandwidth is much less important. That's why FX5900XT was such a steal back then.

Sounds like the NV35 models have sufficient memory bandwidth for the GPU with some headroom but the lower end chips NV31 & NV34 are often memory bandwidth limited.

The 256-bit memory bus probably made a huge difference. Even the lowest NV35 models (5900XT or ZT) had 22.4 GB/sec of bandwidth, compared to the 5800 Ultra's 16 GB/sec.

It's kind of crazy just how far off the mark Nvidia was with the original NV30, and the 128-bit bus is a good example of that. You have to wonder how it would have fared if they had done nothing differently other than designing it from the start with a 256-bit memory bus like the NV35.

Clearly the FX series was way behind ATi with regard to DX9 performance, but if it had done better at the start, when DX9 wasn't as big of a deal yet, it could have saved them a lot of bad press. Still... they came back super strong with the 6800 series very soon after, so they clearly knew by the time the FX series hit the market that the industry was going in a different direction.
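Those bandwidth figures follow directly from bus width times effective memory clock. A minimal sketch, assuming the commonly cited clocks (700 MHz effective for the 5900 XT, 1000 MHz effective for the 5800 Ultra's DDR2):

```python
# Peak memory bandwidth = (bus width in bytes) * (effective memory clock).
# Assumed clocks: 5900 XT = 256-bit @ 700 MHz effective,
# 5800 Ultra = 128-bit @ 1000 MHz effective (500 MHz DDR2).
def bandwidth_gb_s(bus_bits: int, effective_mhz: float) -> float:
    return bus_bits / 8 * effective_mhz * 1e6 / 1e9

print(bandwidth_gb_s(256, 700))   # 22.4
print(bandwidth_gb_s(128, 1000))  # 16.0
```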

Now for some blitting from the back buffer.

Reply 76 of 83, by havli

Rank: Oldbie
shevalier wrote on 2026-02-11, 11:46:

As I mentioned above, not only have I never owned a 59x0 graphics card, but I have never even held one in my hands.
I showed the results for the 5200, and the 5700 behaves exactly the same way.
Your results may be related to both the differences in memory width (256 vs. 128) and the fact that the memory frequencies are so high that they are simply sufficient.

Well, I am looking at your FX 5500 results now... and I still can't see why a 1:1 ratio would be that significant.
All I see here is that the FX 5500 (and possibly the 5600/5700) is significantly bottlenecked by memory frequency. So when you increase memory speed, performance rises too. But whether you run GPU/MEM at exactly 300/300 MHz or 300/290, there will be very little difference - less than 3%, most likely. Or 300/310, for that matter - it should be faster than 300/300 by a small amount. In short: faster memory = better for these low/mid FX cards... but I dare say there are no golden ratios.
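The "no golden ratio" point can be put in numbers: a workload that is purely memory-bandwidth-bound loses at worst the same fraction as the memory clock. A sketch using the hypothetical 300/290 case from this post:

```python
# Worst-case performance loss for a memory-bandwidth-bound workload:
# it scales at most linearly with memory clock. Uses the hypothetical
# 300 MHz vs. 290 MHz case from the post above.
def max_loss_pct(base_mem_mhz: float, new_mem_mhz: float) -> float:
    return (1 - new_mem_mhz / base_mem_mhz) * 100

print(f"{max_loss_pct(300, 290):.1f}%")  # ~3.3% worst case
```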

HW museum.cz - my collection of PC hardware

Reply 77 of 83, by vintageonthemoon

Rank: Newbie
predator_085 wrote on 2026-02-03, 11:18:

All in all, I am happy with my Asus TUSL2-C mainboard with a Tualatin Celeron 1.3 GHz CPU running with a GeForce 4 Ti 4200. I am quite happy with the results. But I am considering maxing out the system by getting a Tualatin Pentium III and maybe a better card.

Which brings me to the question of whether upgrading from the GF4 to a GF FX would do anything for my mainboard/chip in the Win98SE gaming realm, or whether the extra power is useful for Win98SE. I could get a GF FX 5500 at a decent price.

It really depends on what you're looking for. For a Win98 build, both GF4 and (later) FX cards are excellent choices; the only downside of the FX cards is performance, as shader and texture quality is reduced in comparison to the GF4 Ti 4xxx. The 128-bit FX 5200 is not terrible - it's on par with GeForce 2 GTS performance - the 5500 is just an overclocked 5200, and the 5600 Ultra is closer to basic GeForce 4 Ti 4200 levels. Honestly, it's nice to have a mid-range card at stock clocks rather than a more powerful card that tends to die quicker due to more power draw and extra heat. With 20+ year old cards, age can be a big problem: VRAM chips can get corrupted (bit rot), capacitors age and fail, and fans/heatsinks need to be properly cleaned or replaced. I have already replaced the heatsink/fan on three cards and done some recapping.

I also have a P3 Tualatin rig, with a P3 1.26 GHz and 512 MB of SDRAM, that I'm very happy with, and a 128-bit GeForce 4 Ti 4200 is perfect for that rig: very good support for older titles and DOS. I use DirectX 8.2 because some programs (like Daemon Tools) don't run very well under DX7, and it runs very smoothly with the ForceWare 31.40 drivers; the 45.23 drivers are a little finicky under Win98.
The only real use for the FX cards, in my opinion, is DX9 (despite how poorly it was handled) plus nGlide and dgVoodoo support: you can play games with Glide support and save yourself a lot of headache and money on expensive 3dfx Voodoo cards.

Also, the ATI Radeon 9xxx cards are excellent and much cheaper in comparison to the GeForce Ti cards. I have a 9200, a 9500, and a 9600 XT (the 9600 XT is my backup card in case my GF4 card fails). Aside from lacking legacy features like table fog and 8-bit paletted textures, they're very good for Win98 and also have DX9 support. Some games actually look and play better on a Radeon than on a GeForce; the image quality on those Radeon cards is much sharper and cleaner than on the GeForce.

Reply 78 of 83, by bartonxp

Rank: Member

I doubt he/she/they are coming back to their thread. It's been VOGONized!!!!

Reply 79 of 83, by predator_085

Rank: Member
vintageonthemoon wrote on 2026-02-11, 20:36:
predator_085 wrote on 2026-02-03, 11:18:

All in all, I am happy with my Asus TUSL2-C mainboard with a Tualatin Celeron 1.3 GHz CPU running with a GeForce 4 Ti 4200. I am quite happy with the results. But I am considering maxing out the system by getting a Tualatin Pentium III and maybe a better card.

Which brings me to the question of whether upgrading from the GF4 to a GF FX would do anything for my mainboard/chip in the Win98SE gaming realm, or whether the extra power is useful for Win98SE. I could get a GF FX 5500 at a decent price.

It really depends on what you're looking for. For a Win98 build, both GF4 and (later) FX cards are excellent choices; the only downside of the FX cards is performance, as shader and texture quality is reduced in comparison to the GF4 Ti 4xxx. The 128-bit FX 5200 is not terrible - it's on par with GeForce 2 GTS performance - the 5500 is just an overclocked 5200, and the 5600 Ultra is closer to basic GeForce 4 Ti 4200 levels. Honestly, it's nice to have a mid-range card at stock clocks rather than a more powerful card that tends to die quicker due to more power draw and extra heat. With 20+ year old cards, age can be a big problem: VRAM chips can get corrupted (bit rot), capacitors age and fail, and fans/heatsinks need to be properly cleaned or replaced. I have already replaced the heatsink/fan on three cards and done some recapping.

I also have a P3 Tualatin rig, with a P3 1.26 GHz and 512 MB of SDRAM, that I'm very happy with, and a 128-bit GeForce 4 Ti 4200 is perfect for that rig: very good support for older titles and DOS. I use DirectX 8.2 because some programs (like Daemon Tools) don't run very well under DX7, and it runs very smoothly with the ForceWare 31.40 drivers; the 45.23 drivers are a little finicky under Win98.
The only real use for the FX cards, in my opinion, is DX9 (despite how poorly it was handled) plus nGlide and dgVoodoo support: you can play games with Glide support and save yourself a lot of headache and money on expensive 3dfx Voodoo cards.

Also, the ATI Radeon 9xxx cards are excellent and much cheaper in comparison to the GeForce Ti cards. I have a 9200, a 9500, and a 9600 XT (the 9600 XT is my backup card in case my GF4 card fails). Aside from lacking legacy features like table fog and 8-bit paletted textures, they're very good for Win98 and also have DX9 support. Some games actually look and play better on a Radeon than on a GeForce; the image quality on those Radeon cards is much sharper and cleaner than on the GeForce.

Thanks for your detailed reply and the warning about the potential drawbacks of more powerful cards.

What I want is rather simple: I want to play Win98SE games at high settings in 800x600 or 1024x768.

In that regard the Ti 4200 is good, but I wanted to find out if I could max out my system even more.

My main field of use is DirectX 8. For DX9 games my Tualatin Celeron is not ideal.

I also have no use for dgVoodoo. I have a real Voodoo rig: a Coppermine P3 with a Voodoo 3 2000 AGP.

Well, it is not that bad.

bartonxp wrote on 2026-02-12, 01:27:

I doubt he/she/they are coming back to their thread. It's been VOGONized!!!!

Here I am. But you have a point: the thread has moved miles away from the original topic. Never mind, though; the question was answered on the first page already. Like so often in the retro field, the answer is "it depends". The FX cards might have some use, but if I want a super rig for Win98SE I need to move to more modern hardware.