VOGONS


High-end Socket462/A build.


Reply 40 of 49, by Archer57

Rank Member

Yeah, maybe I just do not like buying lootboxes, or maybe the local options are less good, I do not know. I'd totally buy a board like that though - already pulled out of some box, cleaned to a degree, with a known model/specs and in a known state. Replacing capacitors is a matter of an hour or two, no issue at all. To a degree I'd prefer that over an already repaired board, because there is no way to know how well the repair was done. For me this is a much better way to get something I want for cheap, instead of hunting for it in lootboxes.

I absolutely hate Gigabyte stuff at this point too. I mean, I try to avoid the whole love/hate thing towards brands as much as I can, but so far not a single thing from Gigabyte has worked out for me. Not back then, not now. They are always garbage. The GA-7N400S-L I initially intended for this build is one more example of that, and I think it finally made me intentionally avoid their boards. On paper it is a great board - 12V VRM, and the MCP2-R is used instead of the older MCP2 found on Epox boards, which provides 2 extra USB ports and 2 SATA ports that actually work great (no compatibility issues with new drives). It is also in great shape, seemingly was not used a lot (I wonder why...) or has been cleaned very well.

BIOS + general nForce2 issues absolutely kill it though. It does not set memory voltage consistently as it should, and does not give an option to change or even see it. It hangs for a minute on HDD detection unless something is connected to IDE or the controllers are completely disabled. The same goes for SATA. It does not set CPU voltage correctly either (simply 1.65V always, even for 1.5/1.6V CPUs) and does not show it. These are just the critical issues - the fact that the BIOS is lacking many useful options on a seemingly high-end (nForce2 Ultra 400 + MCP2-R) board is just the icing on the cake.

I'll probably stick in a 2500+ Barton + a couple of 166MHz 1GB sticks and sell it to some unfortunate guy as a kit. It'll absolutely work like this with 166MHz FSB and memory, and going higher... would not be my problem at that point...

Reply 41 of 49, by AlexZ

Rank Oldbie

I have a Gigabyte GA-7N400 Pro2, which is almost the same thing as the GA-7N400S-L. It only allows increasing the voltage, not setting it to a specific value. It does allow setting FSB, memory and AGP speed independently, but the FSB can only be set to 4 fixed values. CPU voltage is displayed as OK only. Good for customers who do not want to overclock, which is not necessary these days. It has Gigabyte smart fan control, so a custom fan curve can be used in Windows. I use this feature on Socket 754. It has multiplier settings via DIP switches. It is an average late Athlon XP board, with SATA, a P4 ATX connector and a fast Nvidia chipset. I wouldn't rate it as bad for retro rigs at all, just not right for overclockers.

Gigabyte designed it with non-demanding customers in mind. Socket 754/AM2/AM3 Gigabyte boards are much better in this regard and have more options.

I will be selling it and just keeping whatever doesn't sell. It will need a recap in the future - there are 2 slightly bulging caps, although not leaking. I just set an above-average selling price and then lower it very slowly over time, depending on views and watchers.

Pentium III 900E, ECS P6BXT-A+, 384MB RAM, GeForce FX 5600 128MB, Voodoo 2 12MB, Yamaha SM718 ISA
Athlon 64 3400+, Gigabyte GA-K8NE, 2GB RAM, GeForce GTX 260 896MB, Sound Blaster Audigy 2 ZS
Phenom II X6 1100, Asus 990FX, 32GB RAM, GeForce GTX 980 Ti

Reply 42 of 49, by agent_x007

Rank Oldbie
Archer57 wrote on 2025-06-06, 00:29:

So I've run a few benchmarks:

The attachment 200Mhz_03_05.jpg is no longer available
The attachment 166Mhz_03_05.jpg is no longer available

Ultimately yes, both 2003 and 2005 are limited by GPU even at this resolution. Framerates are basically the same.

Limited by GPU?
They are not. 7600 GS @ GT scores on a Core 2 platform (and yes, it's the AGP version):

The attachment 3DMark 01SE.PNG is no longer available
The attachment 3DMark 03.PNG is no longer available
The attachment 3DMark 05.PNG is no longer available

Note: If you have trouble with games/tests with DX10 cards under WinXP, try the 258.96 driver (with max supported OpenGL at v3.3).

Reply 43 of 49, by AlexZ

Rank Oldbie
agent_x007 wrote on 2025-06-14, 18:23:

They are not. 7600 GS @ GT scores on a Core 2 platform (and yes, it's the AGP version):

Archer57's statement is correct; there is evidence in this topic to prove it. These are facts.

At the same time, you can squeeze more performance out of the 7600 GT by using a more modern CPU. This does not invalidate Archer57's statement, as both are valid ways to improve performance.

I was able to squeeze much more out of it by doing what we did back then historically - just buying a faster GPU instead of using a much faster CPU like you did. It does continue to scale in 3DMark 2003, which is the most relevant benchmark for the Athlon XP, followed by 3DMark 2005, where it doesn't help due to the CPU being overloaded. Interestingly, a slow CPU + GeForce GTX 260 still easily beats the Core 2 Duo in 3DMark 2005, which represents CPU-heavy games not suitable for the Athlon XP. We used an Athlon 64 3400+ at the same frequency for comparison, as it's the closest CPU to an Athlon XP 3200+, without having to spend a kidney on faster GPUs.

The benefit of a GPU faster than the 7600 GT is mainly the ability to play games that already worked fine at the highest possible resolution. It is the same argument as with the GeForce FX 5900.

Pentium III 900E, ECS P6BXT-A+, 384MB RAM, GeForce FX 5600 128MB, Voodoo 2 12MB, Yamaha SM718 ISA
Athlon 64 3400+, Gigabyte GA-K8NE, 2GB RAM, GeForce GTX 260 896MB, Sound Blaster Audigy 2 ZS
Phenom II X6 1100, Asus 990FX, 32GB RAM, GeForce GTX 980 Ti

Reply 44 of 49, by agent_x007

Rank Oldbie

When I see a +10% (3DMark 05) or +20% (3DMark 03) performance increase from swapping the CPU/test platform, I count that as the CPU/platform being too slow to get everything out of the GPU (i.e. a CPU/platform limitation, not a GPU* one [*as mentioned by Archer57]).

Do not assume that higher 3DMark performance with a faster GPU means "clearly the first GPU wasn't fast enough" and "we have headroom to drop an even faster GPU in here". It really doesn't work that way in actual games.

I get that when using way faster GPUs you can still see an increase in the performance score (i.e. "scaling"), BUT there is a point where an Athlon XP 3200+ or Athlon 64 3400+ is simply "not good enough".
That point is reached when you see clearly higher GPU performance on a faster platform.
The likelihood of this issue appearing in an actual game, though, is directly linked to how much performance was left on the table between the two platforms (the bigger the gap, the higher the chances), AND in which test(s) it was seen.
In our case for 3DMark 03, I got a 2x performance uplift in the DX7 test of 3DMark 03 (Wings of Fury), and almost 2x in the Pixel Shader 2.0 feature test (from using a Core 2 vs. an Athlon XP).
This suggests older (DX7) games can benefit most from this, and some complicated pixel shader ones too (but that case is limited to the exact or a very similar shader to the one used by the 3DMark team).

Note: Throwing an even faster graphics card at the tests mentioned above may result in even more issues.
The GPU being starved due to CPU/platform speed is very bad (especially on the 0.1% low metric), and raising the AA level or increasing the resolution will not help in getting playability back (in extreme cases, only limiting/capping FPS manually can help).

Please just keep those things in mind when you encounter issues with the fastest GPUs on slower platforms.
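
As an aside on the 0.1% low metric mentioned above, here is a minimal sketch of one common way such "low" figures are derived from per-frame frametimes (in milliseconds). The numbers and the fps_stats function are made up for illustration; adapt them to whatever your capture tool actually records.

# Minimal sketch: average FPS plus 1% / 0.1% "low" FPS from per-frame
# frametimes in milliseconds. One common definition: FPS computed over
# the worst 1% (or 0.1%) of frames. All figures below are made up.
def fps_stats(frametimes_ms):
    frames = sorted(frametimes_ms, reverse=True)  # slowest frames first
    avg_fps = 1000.0 * len(frames) / sum(frames)
    def low_fps(fraction):
        worst = frames[:max(1, int(len(frames) * fraction))]
        return 1000.0 * len(worst) / sum(worst)
    return avg_fps, low_fps(0.01), low_fps(0.001)

# Example: a mostly smooth run (10 ms frames) with a few 80 ms stalls -
# the average looks fine while the 0.1% low collapses.
avg, low1, low01 = fps_stats([10.0] * 2000 + [80.0] * 5)
print(f"avg {avg:.0f} FPS, 1% low {low1:.0f} FPS, 0.1% low {low01:.0f} FPS")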

Reply 45 of 49, by AlexZ

Rank Oldbie
agent_x007 wrote on 2025-06-14, 21:20:

When I see a +10% (3DMark 05) or +20% (3DMark 03) performance increase from swapping the CPU/test platform, I count that as the CPU/platform being too slow to get everything out of the GPU (i.e. a CPU/platform limitation, not a GPU* one [*as mentioned by Archer57]).

That is a very one-sided view. Things only get clearly CPU bottlenecked in 3DMark 2005. The evidence is in this thread. It doesn't matter if you can't squeeze the GPU out fully - by upgrading the GPU you can still get more FPS.

agent_x007 wrote on 2025-06-14, 21:20:

Do not assume that higher 3DMark performance with a faster GPU means "clearly the first GPU wasn't fast enough" and "we have headroom to drop an even faster GPU in here". It really doesn't work that way in actual games.

We were looking at FPS values instead of 3DMark scores. 3DMark 2003 represents games that aren't as CPU heavy as 2005 and later games. I provided an explanation of the practical usability of this performance boost in this thread.

agent_x007 wrote on 2025-06-14, 21:20:

Note: Throwing an even faster graphics card at the tests mentioned above may result in even more issues.
The GPU being starved due to CPU/platform speed is very bad (especially on the 0.1% low metric), and raising the AA level or increasing the resolution will not help in getting playability back (in extreme cases, only limiting/capping FPS manually can help).

Please just keep those things in mind when you encounter issues with the fastest GPUs on slower platforms.

This is a known consequence. I have commented on this already: a faster GPU may result in increased variance in FPS - hundreds of FPS in CPU-easy scenes and then a sudden drop to 10-20 FPS in CPU-heavy scenes. It can also manifest as micro-stuttering. That is most likely to happen in games represented by 3DMark 2005.

Vertical sync should be enabled to free up the CPU. This works best with an LCD and not so well with a CRT, as an 85-100Hz limit is too high for the Athlon 64/XP. This may help with micro-stuttering but not with CPU-heavy scenes.

If you would like to run any further benchmarks, feel free to make suggestions to keep this a fact-based discussion.

I use a newer GPU on all my rigs - the PIII 900 has a GeForce FX 5600, the Athlon 64 3400+ has a GeForce GTX 260, the Phenom X4 9950 will have a GeForce GTX 780 and the Phenom II X6 1100T has a GeForce GTX 980 (only because the RTX 2080 died after a few months, and I'm not buying another one). I prefer the GPU to sit idle; it consumes less power and that makes for a quieter system.

Pentium III 900E, ECS P6BXT-A+, 384MB RAM, GeForce FX 5600 128MB, Voodoo 2 12MB, Yamaha SM718 ISA
Athlon 64 3400+, Gigabyte GA-K8NE, 2GB RAM, GeForce GTX 260 896MB, Sound Blaster Audigy 2 ZS
Phenom II X6 1100, Asus 990FX, 32GB RAM, GeForce GTX 980 Ti

Reply 46 of 49, by Archer57

Rank Member
agent_x007 wrote on 2025-06-14, 18:23:

Limited by GPU?
They are not. 7600 GS @ GT scores on a Core 2 platform (and yes, it's the AGP version):

Note: If you have trouble with games/tests with DX10 cards under WinXP, try the 258.96 driver (with max supported OpenGL at v3.3).

Fun benchmarks...

Perhaps this whole discussion about what the bottleneck is does not make much sense at all, because it is not that simple and not that binary. Add a better GPU and you get better performance. Replace the CPU/platform and you get better performance. Replace both and you get hugely better performance. There are no "perfect" combinations, and the ones which make no sense at all are only at the very extremes, like trying to pair a Pentium II with an HD 3850.

Basically, if you want to push a specific platform as far as possible, it makes sense to use as fast a GPU as possible; if you want to get as much as you can from a specific GPU, using as fast a platform/CPU as possible makes sense. If you want to run games on a limited budget, a certain mix of the two, usually heavily favoring the GPU, will work best. I am trying to get as much as possible from a specific platform here...

On top of that, everything depends a lot on the specific application.

What I was referring to there is that dropping the CPU frequency while keeping the same CPU/platform does not significantly affect 3DMark scores with the same GPU.

As for stuff other than 3DMark... yes, at this point, after I've experimented some more, I am quite certain that the 7600 GT is more than enough here. Unless I try to do something silly like running stuff at 1920x1080.

agent_x007 wrote on 2025-06-14, 21:20:

I get that when using way faster GPUs you can still see an increase in the performance score (i.e. "scaling"), BUT there is a point where an Athlon XP 3200+ or Athlon 64 3400+ is simply "not good enough".
That point is reached when you see clearly higher GPU performance on a faster platform.
The likelihood of this issue appearing in an actual game, though, is directly linked to how much performance was left on the table between the two platforms (the bigger the gap, the higher the chances), AND in which test(s) it was seen.
In our case for 3DMark 03, I got a 2x performance uplift in the DX7 test of 3DMark 03 (Wings of Fury), and almost 2x in the Pixel Shader 2.0 feature test (from using a Core 2 vs. an Athlon XP).
This suggests older (DX7) games can benefit most from this, and some complicated pixel shader ones too (but that case is limited to the exact or a very similar shader to the one used by the 3DMark team).

Keep in mind that those "performance uplifts" happen at framerates which are completely useless for anything practical. This is a recurring theme with CPU performance in games, old or new. You see 200 FPS vs 300 FPS and it is seemingly a huge difference, but in practice it means no difference at all, because FPS will be limited by vsync, probably at 60-75 for these old games, and those huge numbers, along with the CPU limitations, will never be reached.

So no, older (DX7) games will not benefit from a better CPU at all.
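
To put made-up numbers on that argument: with vsync on, the effective framerate is roughly the minimum of what the platform can deliver and the refresh rate, so a "200 FPS" and a "300 FPS" system look identical on screen. A trivial sketch (the figures are illustrative only, not from any benchmark in this thread):

# Rough vsync model: the visible framerate is capped at the refresh rate
# (ignoring frame pacing details). Both hypothetical systems end up at
# 75 FPS, so the 50% uncapped advantage never reaches the screen.
def effective_fps(uncapped_fps, refresh_hz):
    return min(uncapped_fps, refresh_hz)

for label, uncapped in [("slower CPU", 200), ("faster CPU", 300)]:
    print(label, "->", effective_fps(uncapped, 75), "FPS at 75 Hz")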

This is actually why I completely stopped following, and am no longer interested in, reviews of modern CPUs for modern games - reviewers run stuff at silly, useless FPS and then say that 300 vs 350 is "a huge difference and this CPU is absolutely better". I do understand where they are coming from with this, but... it is completely pointless.

If you look at benchmarks which run closer to framerates that will actually be useful, like Troll's Lair or Mother Nature from 03, you see a much, much smaller difference. And yes, I do understand that these are average framerates...

agent_x007 wrote on 2025-06-14, 21:20:

Note: Throwing an even faster graphics card at the tests mentioned above may result in even more issues.
The GPU being starved due to CPU/platform speed is very bad (especially on the 0.1% low metric), and raising the AA level or increasing the resolution will not help in getting playability back (in extreme cases, only limiting/capping FPS manually can help).

Please just keep those things in mind when you encounter issues with the fastest GPUs on slower platforms.

These issues can exist - mainly when drivers are too new and/or simply garbage. That is usually the case when using cards that are too new, not too powerful. So older high-end ones would work well, while newer, even slower ones may have issues.

GF7 on S462 is definitely not even close to this though; these things were totally used in combination even back then...

Reply 47 of 49, by Joseph_Joestar

Rank l33t++

On the topic of bottlenecks, here's a method that I personally use to see what the system limits are.

First, install a game which has a built-in benchmark (e.g. Doom 3, F.E.A.R. or Splinter Cell: Chaos Theory) and run it in three different resolutions. Usually, I use 800x600, 1024x768 and 1600x1200. Now, if you're seeing almost the same benchmark score for all three resolutions, then you are CPU bottlenecked. This is frequently the case when using an overkill GPU like a GTX 980 Ti for WinXP gaming. However, if you're seeing the benchmark score noticeably drop as the resolution increases, then you are GPU bottlenecked.
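
For anyone who wants to turn that rule of thumb into something semi-automatic, here is a minimal sketch. The FPS figures and the 10% threshold are arbitrary assumptions for illustration, not results from the benchmarks discussed in this thread.

# Sketch of the three-resolution test described above: if average FPS barely
# drops as resolution rises, the CPU is the limit; if it falls noticeably,
# the GPU is. Threshold and example numbers are made up.
def classify_bottleneck(fps_by_resolution, threshold=0.10):
    """fps_by_resolution: list of (resolution, avg_fps), lowest resolution first."""
    low_res_fps = fps_by_resolution[0][1]
    high_res_fps = fps_by_resolution[-1][1]
    drop = (low_res_fps - high_res_fps) / low_res_fps
    return "CPU bottlenecked" if drop < threshold else "GPU bottlenecked"

print(classify_bottleneck([("800x600", 92), ("1024x768", 90), ("1600x1200", 88)]))
print(classify_bottleneck([("800x600", 92), ("1024x768", 70), ("1600x1200", 45)]))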

PC#1: Pentium MMX 166 / Soyo SY-5BT / S3 Trio64V+ / Voodoo1 / YMF719 / AWE64 Gold / SC-155
PC#2: AthlonXP 2100+ / ECS K7VTA3 / Voodoo3 / Audigy2 / Vortex2
PC#3: Core 2 Duo E8600 / Foxconn P35AX-S / X800 / Audigy2 ZS
PC#4: i5-3570K / MSI Z77A-G43 / GTX 980Ti / X-Fi Titanium

Reply 48 of 49, by Trashbytes

Rank Oldbie
Joseph_Joestar wrote on Yesterday, 06:03:

On the topic of bottlenecks, here's a method that I personally use to see what the system limits are.

First, install a game which has a built-in benchmark (e.g. Doom 3, F.E.A.R. or Splinter Cell: Chaos Theory) and run it in three different resolutions. Usually, I use 800x600, 1024x768 and 1600x1200. Now, if you're seeing almost the same benchmark score for all three resolutions, then you are CPU bottlenecked. This is frequently the case when using an overkill GPU like a GTX 980 Ti for WinXP gaming. However, if you're seeing the benchmark score noticeably drop as the resolution increases, then you are GPU bottlenecked.

F.E.A.R. is actually a really good benchmark due to how punishing it can be on the CPU with its physics and AI. Chaos Theory would be great for pushing GPU performance, along with everyone's favorites... Far Cry/Crysis.

Reply 49 of 49, by Archer57

Rank Member
Joseph_Joestar wrote on Yesterday, 06:03:

On the topic of bottlenecks, here's a method that I personally use to see what the system limits are.

First, install a game which has a built-in benchmark (e.g. Doom 3, F.E.A.R. or Splinter Cell: Chaos Theory) and run it in three different resolutions. Usually, I use 800x600, 1024x768 and 1600x1200. Now, if you're seeing almost the same benchmark score for all three resolutions, then you are CPU bottlenecked. This is frequently the case when using an overkill GPU like a GTX 980 Ti for WinXP gaming. However, if you're seeing the benchmark score noticeably drop as the resolution increases, then you are GPU bottlenecked.

Yep, this is a great way to know for a specific application. Technically a benchmark is not even needed; a simple FPS counter can be sufficient to get a rough idea - if performance does not substantially increase with decreased resolution, the CPU is likely the reason.

But the usefulness of this is limited to the specific application - different games have very different requirements, and on the same system they may be bottlenecked by different things.