VOGONS


Reply 40 of 56, by RandomStranger

Rank: Oldbie
nuvyi wrote on 2023-11-27, 01:10:
RandomStranger wrote on 2023-11-20, 11:56:

I'd say the FX5600 is the lowest FX-series card worth using, unless you run a really underpowered CPU that would bottleneck it anyway, or the most important feature for you is running Glide wrappers on the cheap. Or maybe for low-profile builds, but imho there it's beaten by the GF4MX, which you can actually get in low profile with the full 128-bit bus.

GF4MX (128bit) faster than FX5600?? How?

No. It's faster than the FX5200.


Reply 41 of 56, by analog_programmer

Rank: Oldbie
W.x. wrote on 2023-11-26, 12:24:

That's a good idea, I'll definitely try it.

I'm awaiting your feedback after you try to set the proper memory timings for the higher frequencies.

P.S. I have no idea how to manually edit memory timings in nVidia BIOSes with tools like NiBiTor, as they're presented as weird strings of hex values. Does anyone know how to do this?
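
For anyone who wants to poke at those strings, below is a minimal Python sketch of one way to split a NiBiTor-style hex timing string into individual bytes so two BIOSes can be compared side by side. The byte labels are purely hypothetical, since the real per-field meaning is undocumented and varies between BIOS revisions; treat it as a starting point for experimentation, not a reference.

# Minimal sketch: split a NiBiTor-style hex timing string into bytes.
# The field meanings are NOT documented; the labels are placeholders.

def decode_timing_string(hex_string: str) -> dict:
    """Return each byte of the timing string, labeled by position."""
    raw = bytes.fromhex(hex_string.replace(" ", ""))
    return {f"byte_{i:02d}": b for i, b in enumerate(raw)}

if __name__ == "__main__":
    # Made-up example string; paste a real one from NiBiTor here.
    sample = "0A 0C 02 03 0F 10 08 06"
    for name, value in decode_timing_string(sample).items():
        print(f"{name}: 0x{value:02X} ({value})")

Diffing the decoded bytes of a known-good BIOS against a bad one at the same clocks should at least reveal which positions hold the timing values.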

Last edited by analog_programmer on 2023-11-28, 06:23. Edited 1 time in total.

from СМ630 to Ryzen gen. 3
engineer's five pennies: this world goes south since everything's run by financiers and economists
this isn't voice chat, yet some people, overusing online communications, "talk" and "hear voices"

Reply 42 of 56, by gerry

Rank: Oldbie

not sure on the FX5600, but interesting to read (again) about how awful the FX5200 was (or could be). it's one of those things that doesn't really matter in retrospect

it may well have been poor in relation to some contemporary rival, but now it is cheap, still relatively available, and paired with most win98 systems it would support all kinds of games from the late 90s through the early 2000s just fine

i had one gifted to me years ago when it was already behind the times and used it on an athlon xp to play all sorts of games

so in answer - it depends what you mean by great. is it great paired with a P3/P4 or athlon XP for playing Quake 3? sure it is! is it the greatest? probably not, but does it really matter?

Reply 43 of 56, by The Serpent Rider

Rank: l33t++

That's actually even more relevant now, because prices for the FX5200 and FX5600 are on average the same. The regular FX5200 lacks both the performance of the higher-tier FX cards and the better compatibility/lower driver overhead of the GeForce4 MX or older cards. Keep in mind that the GeForce4 MX 440 64-bit (the typical low-profile card) > GeForce FX 5200 64-bit.

I must be some kind of standard: the anonymous gangbanger of the 21st century.

Reply 44 of 56, by Takedasun

Rank: Newbie

Some time ago I did a video card comparison in Need for Speed: Hot Pursuit 2.

Attachment: Hot Pursuit 2.png (73.68 KiB, public domain)

Reply 46 of 56, by W.x.

Rank: Member
The Serpent Rider wrote on 2023-11-28, 03:46:

Keep in mind that the GeForce4 MX 440 64-bit (the typical low-profile card) > GeForce FX 5200 64-bit.

From his graph, FX 5200 64-bit > MX440 64-bit: 22 fps vs 18 fps. And the MX440 64-bit even has the advantage of a 275 MHz core clock, but in this case it's completely bottlenecked by memory, so the core overclock doesn't help. Still, as can be seen, the FX5200 is the better card when bottlenecked on memory.

Reply 47 of 56, by ultra

Rank: Newbie

The FX5600 is closer to the GeForce3 Ti 200 according to the graph, let alone the GeForce4 Ti 4200. It feels like the FX5600 is just an overclocked version of the FX5500. The 5600 Ultra must be catching up with the Ti 4200?

Last edited by ultra on 2023-11-29, 08:33. Edited 2 times in total.

Reply 48 of 56, by analog_programmer

Rank: Oldbie
ultra wrote on 2023-11-29, 03:02:

It feels like the FX5600 is just an overclocked version of the FX5500.

"Feels like", but their GPU-cores are different (NV31 and NV34) and you can't judge for their performance by just one game test. FX5500 is newer revision of FX5200. Radeon 9xxx videocards were better than FX5xxx series, but obviously this particular game is optimized for nV cards.

from СМ630 to Ryzen gen. 3
engineer's five pennies: this world goes south since everything's run by financiers and economists
this isn't voice chat, yet some people, overusing online communications, "talk" and "hear voices"

Reply 49 of 56, by Takedasun

Rank: Newbie

Memory timings have a big impact on performance.

The PROLINK GeForce FX 5500 128-bit 128MB 270/400 MHz has bad memory timings by default. It is slower than the Inno3D GeForce FX 5200 128-bit 128MB 250/400 MHz.
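
As a quick sanity check in Python (using only the clock figures quoted above): both cards run 400 MHz effective DDR on a 128-bit bus, so their theoretical bandwidth is identical, and the gap in the results has to come from timings rather than raw bandwidth.

# Theoretical memory bandwidth from the specs quoted above.
# The effective clock already includes the DDR doubling.

def bandwidth_gb_s(effective_mhz: float, bus_bits: int) -> float:
    return effective_mhz * 1e6 * (bus_bits / 8) / 1e9

cards = [("Inno3D FX 5200 128-bit (250/400)", 400, 128),
         ("PROLINK FX 5500 128-bit (270/400)", 400, 128)]
for name, mem_mhz, bus in cards:
    print(f"{name}: {bandwidth_gb_s(mem_mhz, bus):.1f} GB/s")
# Both print 6.4 GB/s: same bandwidth, so the timings explain the gap.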

Inno3D GeForce FX 5200 128-bit 128MB 250/400 MHz

Attachment: 5200.png (5.39 KiB, public domain)

PROLINK GeForce FX 5500 128-bit 128MB 270/400 MHz

Attachment: 5500a.png (5.4 KiB, public domain)

In some games, the difference can be as much as 30%.

Attachment: Test_Timing_05.png (11.66 KiB, public domain)
Attachment: NFS.png (74.5 KiB, public domain)

Reply 50 of 56, by analog_programmer

Rank: Oldbie

Takedasun, of course unoptimized memory timings have a bad impact on performance, but some RAM chips don't support timings as tight as others at the same memory frequency.

This nTiming software seems to be much more useful than NiBiTor for decoding BIOS memory timings. Where can I get it?

P.S. I thought this "nTiming" was part of nVidia BIOS Modifier v3.x, but it's not.

Last edited by analog_programmer on 2023-11-29, 13:03. Edited 1 time in total.

from СМ630 to Ryzen gen. 3
engineer's five pennies: this world goes south since everything's run by financiers and economists
this isn't voice chat, yet some people, overusing online communications, "talk" and "hear voices"

Reply 51 of 56, by Putas

Rank: Oldbie
Takedasun wrote on 2023-11-29, 08:49:

In some games, the difference can be as much as 30%.

Holy Moly, I did not expect anything that big with these mature cards. Do you know other such cases?

Reply 52 of 56, by Takedasun

Rank: Newbie
Putas wrote on 2023-11-29, 09:25:
Takedasun wrote on 2023-11-29, 08:49:

In some games, the difference can be as much as 30%.

Holy Moly, I did not expect anything that big with these mature cards. Do you know other such cases?

BloodRayne 2 is the most extreme case; other games show an average difference of 5-20%. I only did research on these two video cards, because I noticed an anomaly in the test results.

Bad timings do have one advantage: the video memory can easily be overclocked up to 550 MHz.

analog_programmer wrote on 2023-11-29, 09:15:

Takedasun, of course unoptimized memory timings have a bad impact on performance, but some RAM chips don't support timings as tight as others at the same memory frequency.

This nTiming software seems to be much more useful than NiBiTor for decoding BIOS memory timings. Where can I get it?

I'll have to look, I did the tests three years ago.

Attachment: nTimings.zip (223.32 KiB, public domain)

Reply 53 of 56, by analog_programmer

Rank: Oldbie
Takedasun wrote on 2023-11-29, 12:00:

I'll have to look, I did the tests three years ago.

Thank you very much for the attached software! It will be of use to me when I'm optimizing older nVidia BIOSes.

from СМ630 to Ryzen gen. 3
engineer's five pennies: this world goes south since everything's run by financiers and economists
this isn't voice chat, yet some people, overusing online communications, "talk" and "hear voices"

Reply 54 of 56, by ODwilly

Rank: l33t
RandomStranger wrote on 2023-11-21, 09:48:
analog_programmer wrote on 2023-11-21, 07:47:

it's more of a marketing strategy.

Of course it is. And it's still being done on both sides.
That's the only reason for the RTX4060 or RX6600 to have fewer shader units than the Ti/XT variant: it's the same GPU. In the late Core 2 era Intel manufactured only one or two quad-core desktop CPUs and disabled parts of the cache and some of the cores for the lower market segments. Sometimes those parts were faulty and couldn't reach the specs of the flagship model, but as time went on that grew rare. Same with the Phenom II, which was famous for being able to unlock the disabled CPU cores, and maybe some Athlons could unlock the L3 cache, but I'm not sure about that.

It just makes business sense to serve all price ranges with the smallest number of products while blocking easy access to free upgrades.

Off topic, but I really liked the Sempron 145 for this. Most cheap AM3/AM3+ boards support core unlocking, and like 90% of Semprons are low-wattage Athlon dual-cores in disguise.

Main pc: Asus ROG 17. R9 5900HX, RTX 3070m, 16gb ddr4 3200, 1tb NVME.
Retro PC: Soyo P4S Dragon, 3gb ddr 266, 120gb Maxtor, Geforce Fx 5950 Ultra, SB Live! 5.1

Reply 55 of 56, by W.x.

Rank: Member
Takedasun wrote on 2023-11-29, 12:00:
Putas wrote on 2023-11-29, 09:25:
Takedasun wrote on 2023-11-29, 08:49:

In some games, the difference can be as much as 30%.

Holy Moly, I did not expect anything that big with these mature cards. Do you know other such cases?

BloodRayne 2 is the most extreme case; other games show an average difference of 5-20%. I only did research on these two video cards, because I noticed an anomaly in the test results.

Bad timings do have one advantage: the video memory can easily be overclocked up to 550 MHz.

I also never saw such huge differences, not only from memory timings but even from frequencies, even with big gaps (like 400 MHz vs 600 MHz).
A 30% difference in performance usually occurs between the 128-bit and 64-bit (memory bus width) versions of the same card. In that case the memory bandwidth is doubled (on the 128-bit version), and even from that huge change you usually get only up to 30% more performance.
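
To put rough numbers on that, here is a back-of-the-envelope sketch; the 400 MHz clock is illustrative rather than from a specific benchmark. Doubling the bus doubles theoretical bandwidth, yet the observed gain tops out around 30% because games are only partly bandwidth-bound.

# Back-of-the-envelope: 64-bit vs 128-bit bus at the same memory clock.

def bandwidth_gb_s(effective_mhz: float, bus_bits: int) -> float:
    return effective_mhz * 1e6 * (bus_bits / 8) / 1e9

bw_64 = bandwidth_gb_s(400, 64)    # e.g. an FX 5200 64-bit
bw_128 = bandwidth_gb_s(400, 128)  # the 128-bit version of the same card
print(f"64-bit:  {bw_64:.1f} GB/s")
print(f"128-bit: {bw_128:.1f} GB/s (double the bandwidth, ~30% more fps)")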

When you change timings, that is usually worth only a few megahertz of memory clock. For example, I recently tested DDR3 vs DDR2 memory on the same graphics card to see what difference the timings make (DDR3 usually has much worse timings than DDR2). The difference was only about 3% in 3DMark 2005.
Graphic card underclocking questions, memory bandwidth DDR2 vs DDR3 (solved)
In other words, the DDR3 version had to be clocked at 510 MHz with worse timings to match 480 MHz (DDR2 with better timings).
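
The cycles-to-nanoseconds conversion shows why: absolute latency is cycles divided by clock, so slightly worse timings can be bought back with a slightly higher clock. In the sketch below only the 480/510 MHz clocks come from the test above; the CAS values are assumed for illustration.

# Absolute latency in nanoseconds for a given cycle count and clock.

def latency_ns(cycles: int, clock_mhz: float) -> float:
    return cycles / clock_mhz * 1000

# Hypothetical CAS latencies; only the clocks are from the actual test.
print(f"DDR2 480 MHz CL4: {latency_ns(4, 480):.2f} ns")  # 8.33 ns
print(f"DDR3 510 MHz CL5: {latency_ns(5, 510):.2f} ns")  # 9.80 ns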

I am surprised that there are games/benchmarks where the impact is that big, and I will retest it. Maybe when there is a pure memory bottleneck the difference can grow much more, but still... I think there is something else at play in this case than just worse timings. Maybe the original FX5500 BIOS is corrupted and the timings are unrealistically bad? I never saw such a huge difference; it would show up in official benchmarks and reviews, where usually 10-12 different cards with the same core and the same (or similar) frequencies are tested, and I never saw one card get more than 10% over another card of the same type and frequencies.

And particularly in the GeForce 2 to 6 generations, I remember how all the cards were very even and the differences in the graphs were almost non-existent.

That's why I am suspicious that the original FX5500 card is somehow screwed... or its BIOS is.

Reply 56 of 56, by Putas

Rank: Oldbie

I also lived by the golden rule that mature 3D accelerators are resistant to memory latency. Maybe it is some FX-specific effect, like its register pressure under PS 2.0 producing more CPU-like memory requests.