VOGONS


Reply 20 of 56, by The Serpent Rider

User metadata
Rank l33t++
W.x. wrote on 2023-11-18, 20:52:

For early 2003, it was a great budget card, particularly in a good configuration.

It was a trash offering, because Radeon 9000/9200 existed. Or even discounted Radeon 9100, which completely mopped the floor with all Nvidia budget cards of that period.

So let's try FX 5200 (128-bit, overclocked to 300/300)

Let's not. It's an absolutely unrealistic scenario, when the majority of "good" GeForce 5200 cards had only 5ns memory.

I must be some kind of standard: the anonymous gangbanger of the 21st century.

Reply 21 of 56, by W.x.

User metadata
Rank Member
The Serpent Rider wrote on 2023-11-18, 23:05:

It was a trash offering, because Radeon 9000/9200 existed. Or even discounted Radeon 9100, which completely mopped the floor with all Nvidia budget cards of that period.

So let's try FX 5200 (128-bit, overclocked to 300/300)

Let's not. It's an absolutely unrealistic scenario, when the majority of "good" GeForce 5200 cards had only 5ns memory.

Radeons and ATI had their own problems, and many people were not fans of them: worse drivers, a worse control center, worse compatibility. My experience with ATI cards was always terrible; then I switched to Nvidia, and that was completely different. I was never a fan of the "just look at the FPS" thing. Not to mention that the performance of ATI cards at the time of release often struggled because of badly tuned drivers; many of the graphs you see now were made with later, tweaked drivers. But that wasn't the reality of owning a Radeon 9200 during 2003. I agree that the difference between the Radeon 9200 and the FX5200 is quite big, but from the experiences I have, I would never go back to ATI cards.

It's true that lots of FX5200 cards have bad memory, and you can't see inside the box, so it's a lottery to get a good one. But the situation with bad memory on Radeons was even worse. I saw lots of people in discussions unable to reach even the reference speed; far more manufacturers used crappy memory on Radeon cards. Out of 6 Radeon 9600/9600 Pro and 9550 cards, I couldn't get reference clocks (300 MHz memory) on any of them. I even tried a 3.6ns Samsung one, but 280 MHz was the maximum. I have terrible experiences with Radeon cards in the low-end and mid-range spectrum.

Also, the 9200 series was locked. You needed to unlock it first, and most people didn't want to mess with that. Overclocking an unlocked card like the FX5200 was much easier.

Btw, I have 4 FX5200 cards: two of them have 4ns memory and two have 3.6ns memory. Two of them can be overclocked to 300 MHz on the memory. That's a 50% increase over the 200 MHz stock clock.
Good memory like that was much more frequent on Nvidia cards; Radeons far more often had terrible memory. I saw historical discussions and forums where people were praying just to reach the reference memory clock. Often 325 MHz was the reference, but they got 250 MHz stock, and 290 MHz was a good result. I saw this scenario only very occasionally on Nvidia cards.
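As an aside for anyone puzzling over the ns figures in this thread: the nominal clock of a memory chip is roughly the reciprocal of its rated access time, so 5ns corresponds to about 200 MHz, 4ns to 250 MHz, 3.6ns to ~278 MHz and 3.3ns to ~300 MHz. A minimal sketch of the conversion (the ratings listed are just the ones mentioned in this thread):

```python
# Rule of thumb: nominal memory clock (MHz) ~= 1000 / rated access time (ns).
ratings_ns = [5.0, 4.0, 3.6, 3.3]  # chip ratings discussed in this thread

for ns in ratings_ns:
    real_mhz = 1000.0 / ns    # real (single-data-rate) clock
    ddr_mhz = 2 * real_mhz    # "effective" DDR figure often used in marketing
    print(f"{ns:.1f} ns -> ~{real_mhz:.0f} MHz real clock (~{ddr_mhz:.0f} MHz DDR)")
```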

Reply 22 of 56, by W.x.

User metadata
Rank Member

Well, I went to check the auctions, and the first Radeon I came across:
https://cdn.aukro.cz/images/sk1699777238343/g … -176532100.jpeg

The worst possible R9600XT that exists: Samsung LC50 memory. (The original should be 300 MHz, so 3.3ns memory.)
Well... just saying...

Reply 23 of 56, by Putas

User metadata
Rank Oldbie
W.x. wrote on 2023-11-19, 16:20:

Out of 6 Radeon 9600/9600 Pro and 9550 cards, I couldn't get reference clocks (300 MHz memory) on any of them. I even tried a 3.6ns Samsung one, but 280 MHz was the maximum.

You do know that, out of those, only the 9600 Pro has a 300 MHz reference memory clock?

Reply 24 of 56, by Jo22

User metadata
Rank l33t++

I can't say anything positive about the FX 5200, except that it was affordable as a 3D office card.

When Windows Vista launched, the FX 5200 was the lowest-end card to support DirectX 9 and Shader Model 2, which was important for Aero Glass.

That's because Vista could then render the desktop as 3D objects on the graphics card, lowering the load on the CPU.

Windows XP was 2.5D and thus different; there, disabling the eye candy did improve performance.

Oh, and on Power Macs, the GeForce FX was supported, too.
It had the ability to handle Quartz Extreme, Core Image and OpenGL 2.x!
The first two were needed for all the fancy animations that made OS X Tiger so pretty.

But Windows 98... I don't know. What I wonder is how well the FX 5200 drivers support 8-bit/15-bit/16-bit graphics, S3 texture compression, bump mapping and transform & lighting (T&L).
These are among the things that Windows 9x era games may ask for.

"Time, it seems, doesn't flow. For some it's fast, for some it's slow.
In what to one race is no time at all, another race can rise and fall..." - The Minstrel

//My video channel//

Reply 25 of 56, by W.x.

User metadata
Rank Member
Putas wrote on 2023-11-20, 09:14:
W.x. wrote on 2023-11-19, 16:20:

Out of 6 Radeon 9600/9600 Pro and 9550 cards, I couldn't get reference clocks (300 MHz memory) on any of them. I even tried a 3.6ns Samsung one, but 280 MHz was the maximum.

You do know that, out of those, only the 9600 Pro has a 300 MHz reference memory clock?

Yes, I know. I was also speaking about the rated speed of the memory chips. I took a Gigabyte Radeon 9550 with 4ns Hynix memory and thought I'd get at least a 250 MHz overclock. To my surprise, even with 4ns memory chips, the maximum overclock was 230 MHz! (so 20 MHz under the rated speed). My experience with ATI cards is much worse; stuff like this always happened on the ATI side, not Nvidia (I admit it exists there too, but in smaller numbers). That's my personal experience after hundreds of graphics cards. The reason, I think, is that on the ATI side the manufacturing was cheapened out much more. The same thing happened to me with the Radeon 7500, 9100 and 8500/8500LE: it was problematic to get them even to reference speeds, or, since they were underclocked very often, at least to overclock them back up. As I said, I'm not the only one with this problem on Radeon cards (of the 2000-2005 era; after that I think the situation gradually got better). Of course, part of the problem is that cheap manufacturers and their versions (like PC Partner/Sapphire and PowerColor) were much more prominent. Asus cards are in general better in this respect, but their price was always a premium; sometimes they were even overpriced. Many people bought Sapphire cards because of the good price, but they paid for it with this problem.
For example, the Radeon 9600 series has a very powerful core in comparison with NV31/NV34 (FX 5200/5600). But if you choke it with slow memory (5ns, 200 MHz), the fast core will be completely limited by memory bandwidth, and you won't get much more than a slower core (FX5200) with the same memory (200 MHz, 128-bit, so the same bandwidth).
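To make the bandwidth point concrete, here is a minimal sketch of the usual peak-bandwidth arithmetic (the clocks and bus widths are just the configurations compared in this thread):

```python
# Peak DDR memory bandwidth = real clock * 2 transfers per clock * bus width in bytes.
def ddr_bandwidth_gbps(clock_mhz: float, bus_bits: int) -> float:
    return clock_mhz * 1e6 * 2 * (bus_bits / 8) / 1e9

print(ddr_bandwidth_gbps(200, 128))  # 6.4 GB/s -- the "choked" 200 MHz / 128-bit case
print(ddr_bandwidth_gbps(300, 128))  # 9.6 GB/s -- reference 9600 Pro memory clock
print(ddr_bandwidth_gbps(200, 64))   # 3.2 GB/s -- a 64-bit bus at the same 200 MHz
```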

Sorry, but this is my experience with ATI cards, repeating again and again. For example, the Sapphire Radeon 9600 Pro Advantage, which is the most widespread Radeon 9600 Pro, has a stock memory clock of 222 MHz (instead of 300 MHz). And the memory on it is often terrible: you buy a 4ns or 3.6ns card hoping that you will FINALLY get the reference 300 MHz clock, but my 4 attempts with 4ns and 3.6ns memory were unsuccessful (always topping out around 270-285 MHz). I still don't have a Radeon 9600 Pro able to reach the reference 300 MHz, only a 9600XT (the black one with 3.3ns memory, but it was much more expensive to get). It must have been the same back then: the black Radeon 9600 Pro/XT with 3.3ns memory had to be more expensive. The problem here was inexperienced users. Not only did they not know how to tweak anything through drivers and utilities, or how to bypass the overclock protection on the Radeon 9200/9600, they also didn't know which card to buy. The Sapphire Radeon 9600 Pro Advantage looked good, the box looked good, the card looked good, so a normal casual user just bought it, hoping to get a big boost over the Nvidia competition (for example the FX5200/5600). In reality, the result for the majority of users was 222 MHz (instead of 300 MHz), so maybe a 25-30% drop in performance from the memory alone, and then 6 to 12 months of random irritating problems before ATI finally fixed the drivers.

This is why I don't see the real historical FX5200/FX5600 experience during 2003 as so much worse.
I agree, though, that the irritating FX 5200 "64-bit" problem was just as severe, and I suspect the "FX 5200, worst graphics card" reputation originates from it. When you read historical discussions, you always find most people bashing it or posting suspiciously low results, while a minority of users cannot understand why everyone is bashing it so much, because they had a better experience (and often post almost double the FPS in their benchmark results). Only a few users in those discussions point out that the others probably have the 64-bit, underclocked version. Overclocking awareness among casual users was very low in 2003 (I know from my own case: back then I didn't overclock graphics cards, and the same 64-bit problem happened to me with a Gainward MX440-8x. Gainward was a terrible manufacturer in this respect, because they labelled the 64-bit and low-end versions with cool names like "Powerpack Pro", and only the Golden Sample out of all of them was full 128-bit).
After some years, awareness of "I've got the 64-bit, underclocked version" became more prominent, and more and more users started to understand it and avoid it.
But the reputation of the 2003 "FX 5200" stayed the same after that; all that hate remained.

All I am saying is that in a good configuration, the way Nvidia planned it (128-bit memory), plus trying to get at least 4ns Hynix, Samsung or Infineon memory, it was a good budget card that is not behind Nvidia's other lowest-end solutions but the opposite: it is the second best (after the GeForce 6200).

What is worse, even experts like PixelPipes on YouTube got the 64-bit version when first testing the FX5200 for a historical review (not marking it as "cut down", he thought it was a real FX5200, the way it always was). The results were terrible, of course, and many users in the comment section pulled out the "FX5200, worst graphics card in history" line again. After he was alerted in the comments to get a 128-bit version, he made a corrected video, but that of course got far fewer views. There he admitted that, with the roughly 30% boost in performance, it starts to be a kinda OK lowest-end graphics card of its generation. And the FX5200 has quite good overclocking potential, since it originates from the NV31 core, which is meant for the FX5600 with a 325 MHz core clock.
So you can usually overclock it to 325 MHz or even more; the only thing that matters for a really good boost on the FX5200 is to get good memory.
And here is the main point: from my experience, when you take 10 random Nvidia cards, the memory is simply much more often better than when you blindly take ATI Radeon cards. You also don't have the overclock protection problem on the FX5200. Overall, a good budget card for early 2003, when you know the details and overclock it.
A 325/240-250 scenario is almost always realistic on the FX5200, and often 270-300 MHz. With these speeds, try benchmarking it with age-correct drivers from early to mid 2003, and you'll see that calling the FX5200 the worst graphics card of all time is not justified. The ones who screwed it up were the cheap manufacturers. All in all, I don't see the FX5200 as such a bad low-end card that you have to add notes like "it shouldn't even exist, that's how bad it is". I'm asking why?
If the MX440 was not considered the worst card of all, one that shouldn't even have existed, then the FX5200 cannot be either.

Reply 26 of 56, by Ydee

User metadata
Rank Oldbie
W.x. wrote on 2023-11-19, 19:06:

Well, I went to check the auctions, and the first Radeon I came across:
https://cdn.aukro.cz/images/sk1699777238343/g … -176532100.jpeg

The worst possible R9600XT that exists: Samsung LC50 memory. (The original should be 300 MHz, so 3.3ns memory.)
Well... just saying...

It is not a Radeon 9600XT but a Chinese fake with a different GPU, as you can see here: https://aukro.cz/graficka-karta-sapphire-ati- … -agp-7040988166

Reply 27 of 56, by RandomStranger

User metadata
Rank Oldbie

I mostly agree with The Serpent Rider. I have a full 128-bit FX5200 and even that is soundly beaten by my 128-bit MX440 in almost all the games I've tried. Even when the FX performs similarly in average frame rate, frame times and the overall gameplay experience are much better on the MX. And again, that's the less abysmal 128-bit variant, which you don't get in a low-profile form factor. The FX5200 is a you-win-some-you-lose-some type of card: it plays better with Glide wrappers than older cards, as analog_programmer's YouTube comment suggests, but you pay for it with the lack of raw performance.

As for the FX5600, it's basically the FX5200 done right. I can recommend that one; it generally matches the speed of the Ti4200 and works with the famous 45.23 driver. The 5200 also works with it, but the FX5700 is too new; for that one, the sweet spot is 56.64.

I'd say the FX5600 is the lowest FX-series card worth using, unless you run a really underpowered CPU that will bottleneck it anyway, or the most important feature is running Glide wrappers on the cheap. Or maybe for low-profile builds, but imho there it's beaten by the GF4MX, which you can actually get in low profile with the full 128-bit bus.


Reply 28 of 56, by Putas

User metadata
Rank Oldbie
W.x. wrote on 2023-11-20, 10:43:
Putas wrote on 2023-11-20, 09:14:
W.x. wrote on 2023-11-19, 16:20:

Out of 6 Radeon 9600/9600 Pro and 9550 cards, I couldn't get reference clocks (300 MHz memory) on any of them. I even tried a 3.6ns Samsung one, but 280 MHz was the maximum.

You do know that, out of those, only the 9600 Pro has a 300 MHz reference memory clock?

Yes, I know. I was also speaking about the rated speed of the memory chips. I took a Gigabyte Radeon 9550 with 4ns Hynix memory and thought I'd get at least a 250 MHz overclock. To my surprise, even with 4ns memory chips, the maximum overclock was 230 MHz! (so 20 MHz under the rated speed).
...

Which is still fine for the card. Just because the memory is rated for some speed does not mean it was implemented that way; they probably did not supply enough voltage for that speed.

Reply 29 of 56, by analog_programmer

User metadata
Rank Oldbie

W.x., to achieve the memory chips' rated speeds you have to set the memory timings in the video card's BIOS according to the chips' maximum rated frequency, as at lower speeds these are usually tightened. The memory chips are not cr*p, but the factory BIOS settings usually are.

from СМ630 to Ryzen gen. 3
engineer's five pennies: this world goes south since everything's run by financiers and economists
this isn't voice chat, yet some people, overusing online communications, "talk" and "hear voices"

Reply 30 of 56, by RandomStranger

User metadata
Rank Oldbie

It's not that the factory BIOS is cr*p, it's just fine-tuned to the intended performance level with not much left in the tank. Hardware usually isn't meant to be overclocked, even in the instances where it happens to be a good overclocker. If it is, then it's listed as a feature (dual BIOS, unlocked multiplier, etc.).


Reply 31 of 56, by analog_programmer

User metadata
Rank Oldbie
RandomStranger wrote on 2023-11-21, 06:30:

It's not that the factory BIOS is cr*p, it's just fine-tuned to the intended performance level with not much left in the tank. Hardware usually isn't meant to be overclocked, even in the instances where it happens to be a good overclocker. If it is, then it's listed as a feature (dual BIOS, unlocked multiplier, etc.).

Sorry, but when a manufacturer removes the hardware's ability to work at its nominal parameters (in this case through BIOS settings for memory timings), I can't call it anything but cr*p. Then what's the point of using fast memory chips (e.g. DDR2 600) and running them at a slower speed (the same DDR2 600 chips at DDR2 300 speed with tight memory timings)? The answer: it's a marketing strategy for video card segments like low-end, lower-mid, mid, high-end, "enthusiast", whatever.

For example, I have a Radeon 9550 video card with 3.3ns GDDR Samsung K4D261638F-TC33 chips, and they can't run at their nominal speed of 300 MHz (GDDR 600 for 3.3ns chips) because the factory BIOS sets them to run at 200 MHz with timings tightened for 200 MHz. But when I set the memory timings (using RaBiT) according to the chips' datasheet values for 300 MHz, there are no more problems running the memory at its nominal 300 MHz frequency. And I don't call this an "overclock", because the memory frequency and timings of these 3.3ns chips are still within their factory specs. And yes, this already encroaches on the Radeon 9600 Pro/XT niche of the 300 MHz memory "reference design", so the shysters in the marketing department won't be happy.
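For anyone wanting to try the same thing: the arithmetic behind adjusting timings for a higher clock is simple. A timing given in nanoseconds in the chip datasheet has to be rounded up to a whole number of clock cycles at whatever frequency you target, so the same ns spec needs more cycles at 300 MHz than at 200 MHz. A minimal sketch of that calculation (the timing values below are made-up examples, not the real K4D261638F-TC33 datasheet numbers):

```python
import math

def cycles_for_timing(timing_ns: float, clock_mhz: float) -> int:
    """Round a datasheet timing (in ns) up to whole clock cycles at the target clock."""
    cycle_time_ns = 1000.0 / clock_mhz
    return math.ceil(timing_ns / cycle_time_ns)

# Hypothetical example timings in ns -- look up the real values in the datasheet.
example_timings_ns = {"tRCD": 12.0, "tRP": 12.0, "tRAS": 32.0}

for clock_mhz in (200, 300):
    cycles = {name: cycles_for_timing(ns, clock_mhz) for name, ns in example_timings_ns.items()}
    print(f"{clock_mhz} MHz: {cycles}")  # 300 MHz needs more cycles per timing than 200 MHz
```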

from СМ630 to Ryzen gen. 3
engineer's five pennies: this world goes south since everything's run by financiers and economists
this isn't voice chat, yet some people, overusing online communications, "talk" and "hear voices"

Reply 32 of 56, by RandomStranger

User metadata
Rank Oldbie
analog_programmer wrote on 2023-11-21, 06:47:

Then what's the point of using fast memory chips (e.g. DDR2 600) and running them at a slower speed (the same DDR2 600 chips at DDR2 300 speed with tight memory timings)?

Maybe it's cheaper to buy a larger quantity of a single type of memory than smaller quantities of different types?
Maybe they can run them cooler at a lower voltage?
Maybe they can go cheaper on other parts without risking stability?

There could be all kinds of reasons. The only fact is that they were meant to be used as is.


Reply 33 of 56, by analog_programmer

User metadata
Rank Oldbie
RandomStranger wrote on 2023-11-21, 07:38:

Maybe it's cheaper to buy a larger quantity of a single type of memory than smaller quantities of different types?
Maybe they can run them cooler at a lower voltage?
Maybe they can go cheaper on other parts without risking stability?

There could be all kinds of reasons. The only fact is that they were meant to be used as is.

In my own example with the Radeon 9550, the memory voltage is the same for both the factory and the "overclocked" speeds, and within specs. And there's no need for passive or active cooling on the memory chips. The question was rhetorical and I gave you an answer: it's more of a marketing strategy.

from СМ630 to Ryzen gen. 3
engineer's five pennies: this world goes south since everything's run by financiers and economists
this isn't voice chat, yet some people, overusing online communications, "talk" and "hear voices"

Reply 34 of 56, by RandomStranger

User metadata
Rank Oldbie
analog_programmer wrote on 2023-11-21, 07:47:

it's more of a marketing strategy.

Of course it is. And it's still being done on both sides.
That's the only reason the RTX 4060 or RX 6600 has fewer shader units than the Ti/XT variant: it's the same GPU. In the late Core 2 era, Intel manufactured only one or two quad-core desktop CPU designs and disabled parts of the cache and some of the cores for the lower market segments. Sometimes the chips were faulty and couldn't reach the specs of the flagship model, but as time went on that grew rare. Same with the Phenom II: it was famous for being able to unlock the disabled CPU cores, and maybe some Athlons could unlock the L3 cache, but I'm not sure about that.

It just makes business sense to serve all price ranges with the smallest number of products while blocking easy access to free upgrades.


Reply 35 of 56, by analog_programmer

User metadata
Rank Oldbie
RandomStranger wrote on 2023-11-21, 09:48:
analog_programmer wrote on 2023-11-21, 07:47:

it's more of a marketing strategy.

Of course it is. And it's still being done on both sides.
That's the only reason the RTX 4060 or RX 6600 has fewer shader units than the Ti/XT variant: it's the same GPU. In the late Core 2 era, Intel manufactured only one or two quad-core desktop CPU designs and disabled parts of the cache and some of the cores for the lower market segments. Sometimes the chips were faulty and couldn't reach the specs of the flagship model, but as time went on that grew rare. Same with the Phenom II: it was famous for being able to unlock the disabled CPU cores, and maybe some Athlons could unlock the L3 cache, but I'm not sure about that.

It just makes business sense to serve all price ranges with the smallest number of products while blocking easy access to free upgrades.

I totally agree.

I just tried to explain why it sucks that manufacturers intentionally cripple good components (fast RAM chips) through software settings (the BIOS), and how we can get around such limitations when we have the right tools. Of course, when things are intentionally crippled at the hardware level, we can't do much in most cases.

from СМ630 to Ryzen gen. 3
engineer's five pennies: this world goes south since everything's run by financiers and economists
this isn't voice chat, yet some people, overusing online communications, "talk" and "hear voices"

Reply 36 of 56, by rcarkk

User metadata
Rank Member
agent_x007 wrote on 2023-11-18, 14:02:

Have a 3DMark 01SE roundup of the FX Ultra family (core/memory clocks matched), tested on an AM2NF3 board with a Phenom II X4:
file.php?id=178766&mode=view

Nice benchmarks. What drivers did you use for the Nvidia cards, particularly the FX 5950U?

Baby AT socket7 - Pentium MMX 233MHz + 3Dfx Voodoo
Socket 8 build - Soyo 6FA + Pentium Pro 200MHz + 3Dfx Voodoo 2 12MB
PC Remake - Pentium III 450 + Matrox G400 16MB
The K6-III build

Reply 37 of 56, by agent_x007

User metadata
Rank Oldbie
rcarkk wrote on 2023-11-23, 14:46:

Nice benchmarks. What drivers did you use for the Nvidia cards, particularly the FX 5950U?

Driver 93.71; all cards were tested under Windows XP (I forgot to mention that in my first post).

Reply 38 of 56, by W.x.

User metadata
Rank Member
analog_programmer wrote on 2023-11-21, 06:47:

For example, I have a Radeon 9550 video card with 3.3ns GDDR Samsung K4D261638F-TC33 chips, and they can't run at their nominal speed of 300 MHz (GDDR 600 for 3.3ns chips) because the factory BIOS sets them to run at 200 MHz with timings tightened for 200 MHz. But when I set the memory timings (using RaBiT) according to the chips' datasheet values for 300 MHz, there are no more problems running the memory at its nominal 300 MHz frequency. And I don't call this an "overclock", because the memory frequency and timings of these 3.3ns chips are still within their factory specs. And yes, this already encroaches on the Radeon 9600 Pro/XT niche of the 300 MHz memory "reference design", so the shysters in the marketing department won't be happy.

That's a good idea, I'll definitely try it.

Reply 39 of 56, by nuvyi

User metadata
Rank Newbie
RandomStranger wrote on 2023-11-20, 11:56:

I'd say the FX5600 is the lowest FX-series card worth using, unless you run a really underpowered CPU that will bottleneck it anyway, or the most important feature is running Glide wrappers on the cheap. Or maybe for low-profile builds, but imho there it's beaten by the GF4MX, which you can actually get in low profile with the full 128-bit bus.

A GF4MX (128-bit) faster than an FX5600?? How?