VOGONS


High-end Socket 462/A build.


Reply 20 of 49, by Archer57

Rank: Member

What's curious is that the 7600 GT is from 2006 and the 9800 GT from 2008, IIRC. So what we are doing here is running a years-old 3DMark on newer hardware. I stopped bothering with 3DMark somewhere in the early 2010s, but from what I remember, before that it had always been very hard to run, and normal values for mid-range contemporary hardware were within 10-20 FPS. So yeah, 2005 is probably more appropriate.

I'll run 2005 later, but I suspect it will be quite slow...

Also yeah, I've seen the 7950 GT, even a few of them. A few hundred dollars. Not unreasonable, but probably not something I'd want to buy. Especially given that these GPUs are not super reliable and there is a well-known way to sell faulty ones as working, without any way to catch it or any hope of a return/refund.

7800 GS... having to buy a specific version is unreasonably hard. In many cases people do not have a clue what they are selling with this old hardware - I've seen people unable to answer what type of memory a card uses even in cases where it turned out to actually use the better option. So being sure that the card in question is the one that's needed would be nearly impossible.

At this point the only reasonable way for me to obtain one of these cards is if I find someone unknowingly selling a PC with one, or something like that. Or trying my luck with a dead one, but even dead ones can be pricey nowadays.

Reply 21 of 49, by AlexZ

Rank: Oldbie

Results for a GeForce GTX 260 (seems to be OCed, GPU clock 625 MHz).

3DMark 2003 breakdown, 1024x768, Athlon 64 3400+, GeForce GTX 260:

  • Wings of Fury - 326 fps
  • Battle of Proxycon - 334 fps
  • Troll's Lair - 247 fps
  • Mother Nature - 240 fps

3DMark 2003 breakdown, 1600x1200, Athlon 64 3400+, GeForce GTX 260:

  • Wings of Fury - 307 fps
  • Battle of Proxycon - 244 fps
  • Troll's Lair - 196 fps
  • Mother Nature - 206 fps

We got a nice boost over the 9800 GT; clearly 3DMark 2003 is not CPU-bottlenecked. This represents games from the 2003-2004 era, or later non-demanding games.

3DMark 2005 breakdown, 1024x768, Athlon 64 3400+, GeForce GTX 260:

  • Return To Proxycon - 35 fps
  • Firefly Forest - 28 fps
  • Canyon Flight - 76 fps

Practically zero boost over the 9800 GT. MSI Afterburner reveals 100% CPU utilization, including in heavy scenes in Return To Proxycon with lows down to 27 fps. This represents CPU-heavy games from 2005 and later.

Pentium III 900E, ECS P6BXT-A+, 384MB RAM, GeForce FX 5600 128MB, Voodoo 2 12MB, Yamaha SM718 ISA
Athlon 64 3400+, Gigabyte GA-K8NE, 2GB RAM, GeForce GTX 260 896MB, Sound Blaster Audigy 2 ZS
Phenom II X6 1100, Asus 990FX, 32GB RAM, GeForce GTX 980 Ti

Reply 22 of 49, by Archer57

Rank: Member

Very interesting results. This pretty much showcases what I would be concerned about if I wanted to upgrade the GPU in this system - would any actually meaningful improvement happen? Sure, the scores can go up, but...

What your benchmarks show is that upgrading the GPU provided no useful benefit here.

It increased FPS from already unnecessarily high numbers like 250 to even higher ones like 330 (looking at "Battle of Proxycon" in 03), which is useless because it was more than fast enough already. If you wanted to play a game like this you would likely use vsync, which would simply result in low GPU load and completely stable FPS on either card.

It did not increase FPS in later, more demanding games, because there it is limited by the CPU. So it would be of absolutely no help in a later game like this.
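
To make the bottleneck argument concrete, here is a minimal sketch assuming a simple model where the frame time is just the larger of the CPU and GPU time per frame (the helper name and all numbers are purely illustrative, not measurements):

# "Slowest stage wins" model of frame rate: a frame takes as long as whichever
# of the CPU or GPU needs more time, so a faster GPU only helps while the GPU
# is the slower side.

def fps(cpu_ms: float, gpu_ms: float) -> float:
    """Frame rate when the slower of CPU and GPU work dominates the frame."""
    return 1000.0 / max(cpu_ms, gpu_ms)

# GPU-bound scene (3DMark03-like): a faster GPU raises FPS
print(fps(cpu_ms=3.0, gpu_ms=4.0))    # ~250 FPS with the slower card
print(fps(cpu_ms=3.0, gpu_ms=3.0))    # ~333 FPS with the faster card

# CPU-bound scene (3DMark05-like): the same GPU upgrade changes nothing
print(fps(cpu_ms=35.0, gpu_ms=20.0))  # ~28.6 FPS
print(fps(cpu_ms=35.0, gpu_ms=10.0))  # still ~28.6 FPS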

It also illustrates quite well why the total 3DMark score, while useful for comparing GPUs at a glance, may not represent the actual performance difference in real applications. And why it might make more sense to use newer versions of 3DMark, more appropriate for these GPUs, to compare systems like this, because they may show the practical difference (or lack of it) better.

I'll definitely run a newer 3DMark on my system later, and perhaps also run it at different CPU frequencies - it should be easy enough to do at 100/133/166/200 FSB without actually swapping any hardware...

Reply 23 of 49, by AlexZ

Rank: Oldbie

The only benefit a high-end GPU would bring to an Athlon XP is playing games that were already playable before at a higher resolution. It would not help with newer CPU-demanding games. Don't bother unless you find one cheap.

In my case, my CRT monitor cannot go beyond 1600x1200, and even that resolution is not practical due to the very low 60 Hz refresh rate. The highest practical resolution is 1280x960 at 85 Hz. The GeForce GTX 260 also runs quite hot, about 48°C when idle and 80°C under load, and is thus noisier. It will require repasting and some fan maintenance to make it usable. Another high-end 9800 GT is coming that should be somewhere in between the stock 9800 GT and the GTX 260, but that has no impact on our findings.

Pentium III 900E, ECS P6BXT-A+, 384MB RAM, GeForce FX 5600 128MB, Voodoo 2 12MB, Yamaha SM718 ISA
Athlon 64 3400+, Gigabyte GA-K8NE, 2GB RAM, GeForce GTX 260 896MB, Sound Blaster Audigy 2 ZS
Phenom II X6 1100, Asus 990FX, 32GB RAM, GeForce GTX 980 Ti

Reply 24 of 49, by Archer57

Rank: Member

So I've run a few benchmarks:

The attachment 200Mhz_03_05.jpg is no longer available
The attachment 166Mhz_03_05.jpg is no longer available

Ultimately yes, both 2003 and 2005 are limited by the GPU even at this resolution. Frame rates are basically the same.

AlexZ wrote on 2025-06-05, 07:45:

In my case, my CRT monitor cannot go beyond 1600x1200, and even that resolution is not practical due to the very low 60 Hz refresh rate. The highest practical resolution is 1280x960 at 85 Hz. The GeForce GTX 260 also runs quite hot, about 48°C when idle and 80°C under load, and is thus noisier. It will require repasting and some fan maintenance to make it usable. Another high-end 9800 GT is coming that should be somewhere in between the stock 9800 GT and the GTX 260, but that has no impact on our findings.

Yeah, even though I use a modern monitor, 1280x1024 is the limit for me most of the time. Most of the games that would run on this system do not support widescreen, and that's the highest resolution that fits. I also often use 1024x768 because I want a larger UI and do not mind the scaling/softer image that much.

What's curious about the faster card and heat - what you are seeing is with silly FPS at the maximum possible load. If you cap it with vsync you may see different results. Newer cards are often more efficient in terms of performance per watt and tend to have better coolers, so a newer card may very well run cooler outside of benchmarks. Not saying that is the case in this specific comparison, but it often is.
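
To put a rough number on the vsync point (a back-of-the-envelope sketch, not a measurement - the helper name, FPS and refresh values are only illustrative):

# With vsync capping output at the monitor refresh rate, the GPU only renders
# a fraction of the frames it could deliver uncapped, so its load (and heat)
# drops roughly in proportion. Real power draw is not perfectly linear with
# load; this just illustrates the idea.

def capped_gpu_load(uncapped_fps: float, refresh_hz: float) -> float:
    """Approximate GPU utilization when FPS is limited to the refresh rate."""
    return min(1.0, refresh_hz / uncapped_fps)

print(capped_gpu_load(uncapped_fps=330, refresh_hz=85))  # ~0.26 on a faster card
print(capped_gpu_load(uncapped_fps=250, refresh_hz=85))  # ~0.34 on a slower card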

That's one reason I like the GTX 750 so much for XP builds with PCIe - it is a small <70 W card, usually even without a power connector, which runs most reasonable XP stuff extremely well. Not sure how well it'd work on an older system like a 754 with PCIe though...

I've also spent a bit more time messing around with that HD 2600 XT (yeah, it is an XT, not a Pro, I remembered that wrong). I assembled a system to test it on with that EP-8RDA (since the Gigabyte one truly sucks), a 2000+ CPU and a gig of RAM (which I spent a whole evening selecting from a box of RAM I have in order for it to work):

The attachment EP-8RDA_.jpg is no longer available

And suddenly the card... just worked. It could run all the benchmarks, etc., completely stable. It turns out that as long as the FSB is 133 MHz it works; at 166 or 200 it does not. The simplest test that crashes with 100% probability is the second pixel shader test in 2001SE. No other settings matter - memory frequency (I tried running it async, as counterproductive as that is on nForce2), AGP frequency (there is an option to set it to 50, which I tried), disabling 8x, fast writes, whatever - as soon as the FSB is 166 or 200 it is unstable. I have no idea why or how the CPU FSB is related to the GPU, but it is what it is. The system is also completely stable at 166 with a different card.

Also, inspecting the card carefully, I've found what seems like a leaky cap:

The attachment 2600XT_cap.jpg is no longer available

There seems to be dust stuck to the PCB in the highlighted area, which would indicate some sticky residue. I will probably swap that cap just to be sure.

And I really need to find something other than nForce2 with AGP to test with...

Reply 25 of 49, by nd22

Rank: Oldbie

The 2003 score is about right:

Reply 26 of 49, by AlexZ

Rank: Oldbie

Perhaps the problem isn't in the HD 2600 XT but somewhere between it and the CPU. We are dealing with worn-out parts that may behave weirdly.

The high-end GeForce 9800 GT I got yesterday is 99% stable. During a 6-7 hour testing period I saw it display weird artifacts and freeze the system twice while switching resolution in a game/benchmark. It was not reproducible. Otherwise it works normally: benchmarks pass, games have no artifacts. It's probably worth cleaning and repasting; the problem could be temperature related.

I had one GeForce 9800 GT that was stable except at 1024x768. It passed benchmarks and load tests, and everything worked fine, but in certain games 1024x768 caused the system to freeze and display weird artifacts. In Windows and in benchmarks 1024x768 worked fine.

My stock GeForce 9800 GT doesn't exhibit such weird issues so I can use it as a baseline.

The GeForce GTX 260 is stable; the only problem is that Sub Command crashes when switching resolution.

The problem with these old GeForce 9800 GT cards is that the NVIDIA driver reduces the fan speed to quiet mode and doesn't rev it up under load. RivaTuner can adjust the fan speed via low-level card settings, but there is only one speed. MSI Afterburner cannot do it, and neither can RivaTuner via the driver. Both my stock and high-end 9800 GT exhibit this issue and run hotter than they need to.

I do not want to use newer cards. I have a GTX 480, but it has really bad stuttering in Need For Speed: Hot Pursuit 2; I tested various drivers, and the problem is probably in the driver code. I have a GTX 770, but it doesn't fit into the case. I only buy GeForce *70/*80 models that are cheap, and I ask for a few days' warranty for testing. As long as the card is 99% stable and doesn't crash during play, it's fine.

Pentium III 900E, ECS P6BXT-A+, 384MB RAM, GeForce FX 5600 128MB, Voodoo 2 12MB, Yamaha SM718 ISA
Athlon 64 3400+, Gigabyte GA-K8NE, 2GB RAM, GeForce GTX 260 896MB, Sound Blaster Audigy 2 ZS
Phenom II X6 1100, Asus 990FX, 32GB RAM, GeForce GTX 980 Ti

Reply 27 of 49, by Archer57

Rank: Member
AlexZ wrote on 2025-06-06, 08:10:

Perhaps the problem isn't in the HD 2600 XT but somewhere between it and the CPU. We are dealing with worn-out parts that may behave weirdly.

And a motherboard full of random salvaged capacitors, yes. But this is weird and I like solving stuff like this; it is so much fun - like playing detective, but with real things and without any nastiness 😀

I had a lot of fun fixing that 7300 GT, which now works perfectly; I'll try to figure out what's wrong in this case too. Worst case, if everything fails, I'll try to reflow the GPU and see if that changes anything - it usually lets you see whether the GPU is faulty by making it work briefly. Technically it should be possible to get a PCIe version as a donor for the GPU too, if it is dead, since PCIe ones are cheap/uninteresting.

What's weird to me is that it crashes in very specific places, like that pixel shader test I've mentioned. With that test disabled it can run through 3DMark a dozen times and be perfectly stable.

Reply 28 of 49, by Trashbytes

Rank: Oldbie

Are you using a fresh install with each of these driver switches between NVIDIA and ATI, or using DDU at the very least? All of this sounds like something has gotten corrupted deep in the DirectX D3D driver stack if it's crashing when shaders are used but runs fine otherwise.

Since it's XP this actually isn't unusual; back in the day, switching between ATI and NVIDIA GPUs sometimes required a fresh install to clear DirectX oddities even if you used DDU, as XP could corrupt the registry and drivers with no indication until DirectX fell over.

If it's a fresh install each time, or DDU, then I would be suspicious of the GPU or a controller downstream of it... like the northbridge/southbridge overheating when the GPU starts demanding/sending more data over the bus once shaders kick in.

just my 2 cents.

Reply 29 of 49, by Trashbytes

Rank: Oldbie
Archer57 wrote on 2025-06-06, 00:29:

So I've run a few benchmarks:

The attachment 200Mhz_03_05.jpg is no longer available
The attachment 166Mhz_03_05.jpg is no longer available

Ultimately yes, both 2003 and 2005 are limited by the GPU even at this resolution. Frame rates are basically the same.

AlexZ wrote on 2025-06-05, 07:45:

In my case, my CRT monitor cannot go beyond 1600x1200, and even that resolution is not practical due to the very low 60 Hz refresh rate. The highest practical resolution is 1280x960 at 85 Hz. The GeForce GTX 260 also runs quite hot, about 48°C when idle and 80°C under load, and is thus noisier. It will require repasting and some fan maintenance to make it usable. Another high-end 9800 GT is coming that should be somewhere in between the stock 9800 GT and the GTX 260, but that has no impact on our findings.

Yeah, even though I use a modern monitor, 1280x1024 is the limit for me most of the time. Most of the games that would run on this system do not support widescreen, and that's the highest resolution that fits. I also often use 1024x768 because I want a larger UI and do not mind the scaling/softer image that much.

What's curious about the faster card and heat - what you are seeing is with silly FPS at the maximum possible load. If you cap it with vsync you may see different results. Newer cards are often more efficient in terms of performance per watt and tend to have better coolers, so a newer card may very well run cooler outside of benchmarks. Not saying that is the case in this specific comparison, but it often is.

That's one reason I like the GTX 750 so much for XP builds with PCIe - it is a small <70 W card, usually even without a power connector, which runs most reasonable XP stuff extremely well. Not sure how well it'd work on an older system like a 754 with PCIe though...

I've also spent a bit more time messing around with that HD 2600 XT (yeah, it is an XT, not a Pro, I remembered that wrong). I assembled a system to test it on with that EP-8RDA (since the Gigabyte one truly sucks), a 2000+ CPU and a gig of RAM (which I spent a whole evening selecting from a box of RAM I have in order for it to work):

The attachment EP-8RDA_.jpg is no longer available

And suddenly the card... just worked. It could run all the benchmarks, etc., completely stable. It turns out that as long as the FSB is 133 MHz it works; at 166 or 200 it does not. The simplest test that crashes with 100% probability is the second pixel shader test in 2001SE. No other settings matter - memory frequency (I tried running it async, as counterproductive as that is on nForce2), AGP frequency (there is an option to set it to 50, which I tried), disabling 8x, fast writes, whatever - as soon as the FSB is 166 or 200 it is unstable. I have no idea why or how the CPU FSB is related to the GPU, but it is what it is. The system is also completely stable at 166 with a different card.

Also, inspecting the card carefully, I've found what seems like a leaky cap:

The attachment 2600XT_cap.jpg is no longer available

There seems to be dust stuck to the PCB in the highlighted area, which would indicate some sticky residue. I will probably swap that cap just to be sure.

And I really need to find something other than nForce2 with AGP to test with...

AGP bus speed is linked to the FSB through dividers. On most AGP 8X systems it is locked to 66 MHz, separate from the FSB, so one shouldn't affect the other, but not all boards can lock the AGP bus speed, and on those the AGP clock rises along with the FSB. Not many AGP cards will accept speeds above 66 MHz, and ATI cards are pretty bad for this; NVIDIA fares better.

No idea about your board, but you may want to do a bit of digging and see whether it is able to lock the AGP/CPU/DDR bus speeds independently of each other; if it can't, that may explain the instability you are seeing at higher bus speeds.

https://en.wikipedia.org/wiki/Accelerated_Graphics_Port
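
To put rough numbers on the divider argument, here is a minimal sketch assuming a board that derives the AGP clock from the FSB with a fixed divider (the divider value and helper name are hypothetical, for illustration only, not any specific board's actual tables):

# Illustrative only: a divider chosen to give ~66 MHz at 133 MHz FSB pushes
# the AGP clock well past spec once the FSB is raised, unless the board can
# lock the AGP clock independently.

AGP_SPEC_MHZ = 66.6

def agp_clock(fsb_mhz: float, divider: float) -> float:
    """AGP clock on a board that derives it from the FSB."""
    return fsb_mhz / divider

for fsb in (133, 166, 200):
    clk = agp_clock(fsb, divider=2)
    status = "in spec" if clk <= AGP_SPEC_MHZ + 1 else "overclocked"
    print(f"FSB {fsb} MHz -> AGP {clk:.1f} MHz ({status})")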

Reply 30 of 49, by Archer57

Rank: Member
Trashbytes wrote on 2025-06-06, 09:20:

Are you using a fresh install with each of these driver switches between NVIDIA and ATI, or using DDU at the very least? All of this sounds like something has gotten corrupted deep in the DirectX D3D driver stack if it's crashing when shaders are used but runs fine otherwise.

Since it's XP this actually isn't unusual; back in the day, switching between ATI and NVIDIA GPUs sometimes required a fresh install to clear DirectX oddities even if you used DDU, as XP could corrupt the registry and drivers with no indication until DirectX fell over.

If it's a fresh install each time, or DDU, then I would be suspicious of the GPU or a controller downstream of it... like the northbridge/southbridge overheating when the GPU starts demanding/sending more data over the bus once shaders kick in.

just my 2 cents.

Not exactly a fresh install - I have a sysprepped image of clean XP made in a VM which I restore. It does not have any GPU drivers installed and is fast to restore, so that's what I do for testing (on separate storage, so that I do not mess up the actual system I am using). I suppose I could try a clean install from CD just to be 100% certain, but it is kind of a pain with all the updates and everything.

I do not like stuff like DDU, and restoring the image is not much slower anyway, so I do not use it.

The SB/NB I did check with my ultra-precise finger thermometer 😁
But seriously - both have heatsinks, do not get too hot to hold, and I've tried throwing a fan on top just to be sure. It should not be that.

I've also gotten precisely the same results on different motherboards (all nForce2 though, which I probably need to rectify) with different CPUs, RAM and PSUs, which makes me think it has to be related to the video card/GPU or chipset compatibility at this point.

The only reason I noticed it works with a 133 MHz FSB is that I was too lazy to take the faster CPU from another board and threw in a slower one with a 133 MHz FSB, assuming that for this testing it would not matter. And since it does work at 133 MHz FSB, it makes me think the software side should be fine.

Thanks for the suggestions though.

Trashbytes wrote on 2025-06-06, 09:29:

AGP bus speed is linked to the FSB through dividers. On most AGP 8X systems it is locked to 66 MHz, separate from the FSB, so one shouldn't affect the other, but not all boards can lock the AGP bus speed, and on those the AGP clock rises along with the FSB. Not many AGP cards will accept speeds above 66 MHz, and ATI cards are pretty bad for this; NVIDIA fares better.

No idea about your board, but you may want to do a bit of digging and see whether it is able to lock the AGP/CPU/DDR bus speeds independently of each other; if it can't, that may explain the instability you are seeing at higher bus speeds.

https://en.wikipedia.org/wiki/Accelerated_Graphics_Port

On this board I can set the AGP frequency manually in the BIOS. There is 50 MHz, 66 MHz, and then a whole bunch of higher options in small steps. On auto, AIDA shows 66 MHz. I tried setting it to 66 MHz manually, and to 50 MHz too. It did not change the behavior.

I also did not overclock the FSB. I set it to the standard 133/166/200 values, at which all bus speeds are normal. These are the values different CPUs run at normally; issues with overclocked bus speeds, memory, etc. only happen when something in between is used, like 150 MHz or so.

Memory can be set manually too, but nForce2 does not really like it when the memory frequency differs from the FSB, so it is set to 1:1. The memory is 200 MHz though, and the timings are set manually, so it should be fine operating at a lower frequency like 133 or 166 MHz. I ran memtest to verify too - a lot, since nForce2 is so picky about memory.

And having reread that... it feels like I am trying to sound like I am smart and know everything. Sorry about that; that really is not the case. All of these suggestions are helpful, and I do take them seriously and reconsider my assumptions. So if there are flaws in my logic here, hearing about them would be very useful.

Reply 31 of 49, by Trashbytes

Rank: Oldbie

Hrmmm, it's a 2600 XT, right? Which is a native PCIe card with an ATI AGP bridge chip... and those are notorious for being flaky as hell if at any point in their life they have been allowed to overheat. (ATI didn't really design the cards to cool these bridge chips, and the HD 2000 series cards ran hot.)

Not saying it's got a bad bridge chip, but since all the issues with the card seem to be related to the AGP bus and how the card communicates through it, I would check that chip myself and see whether it's got enough cooling, or whether it's getting hot when the GPU is under load.

Reply 32 of 49, by Archer57

Rank: Member
Trashbytes wrote on 2025-06-06, 10:52:

Hrmmm, it's a 2600 XT, right? Which is a native PCIe card with an ATI AGP bridge chip... and those are notorious for being flaky as hell if at any point in their life they have been allowed to overheat. (ATI didn't really design the cards to cool these bridge chips, and the HD 2000 series cards ran hot.)

Not saying it's got a bad bridge chip, but since all the issues with the card seem to be related to the AGP bus and how the card communicates through it, I would check that chip myself and see whether it's got enough cooling, or whether it's getting hot when the GPU is under load.

Yep, the chip is on the other side of the board and has no cooling, as is typical for ATI/AMD cards. It also has no heatspreader (bare die) and is surrounded by that pink stuff typical for AMD, presumably there to protect the small SMD components on the chip substrate, which prevents installation of any heatsink.

This cannot be measured with the finger thermometer - it's way too hot, so I'd need to use a thermocouple, which might actually be a good idea. It does sink heat into the PCB pretty effectively - the whole PCB around it gets hot. But if I had to draw on my experience touching 3D printer hotends and soldering irons, I'd very roughly place it at 80°C at least, probably more.

And with Sapphire's cooler the whole thing gets hot in general - the GPU idles around 50°C and the whole card is... rather warm. Not sure what the load temps are, as it crashes, but it does increase its fan RPM substantially. Somebody already installed small heatsinks on the memory chips on the backside of the card, and those get quite warm too; the memory chips on the front do contact the main cooler through rather thick (probably 2 mm) thermal pads. Those pads are in rough shape (and I probably need to replace them), and the fan has been drilled and lubed, by the looks of it multiple times. The fan is very obviously on its last legs, but it does function for now. When I first took the cooler apart it did have some dust in it, but not a crazy amount, and it was not completely blocked. So there is hope that whoever had the card serviced it appropriately.

The whole heat situation (+ sticky goop) actually made me suspect the caps; those are usually the part that dies from heat the fastest. I'll probably swap the cap I highlighted before first, as that is easy enough (and I have plenty of donors); if it really is faulty I'll probably consider a full recap.

Not sure what I can do to verify the bridge chip though. It is hot and has no cooling, as designed. The card evidently worked for a while, so it must be within spec. The only thing I could do is reflow it and see if that changes anything, but that's rather "not great" for the chip, and as with the GPU I'd prefer to use that only as the very last option.

The whole situation with this chip seems... rather weird. On NVIDIA cards this chip is usually cooled. On the 7600 GT and 7300 GT I have from Palit it contacts the main cooler through a thin aluminum plate and a thin thermal pad, and it is on the front side so it gets some airflow. Not the best cooling, but a whole lot better than nothing. I also have a Gigabyte 6600, and there it has a sizable separate heatsink which also gets some airflow; at idle it is warmer than the main GPU heatsink. But on ATI cards it is always on the backside with zero cooling. Did NVIDIA specifically ask for cooling or something? Why did third-party manufacturers, which presumably at least partly design their own cards, treat these chips so differently on NVIDIA and AMD/ATI cards?

Reply 33 of 49, by Trashbytes

Rank: Oldbie

The issue with it getting hot is not whether it works at all, but whether it works correctly - as in, not falling on its face as soon as it's thrown under load. The fact that it's too hot to touch while at idle suggests to me that it needs some cooling, or that it's already partially damaged, as it shouldn't be getting that hot when not under load. I had a HIS IceQ 2600 XT AGP back in the day, and I remember having to stick a heatsink onto that chip to keep it cool, as it would start to flake out when gaming and cause system crashes; once I threw a heatsink at it, it was a much more stable card. These ATI bridge chips are well known in the community for dying from heat - no idea what drugs the ATI engineers were on to not put a heatsink on them.

So don't take the fact that ATI didn't put cooling on it as meaning it's fine... ATI were idiots about inadequate cooling, and the 9700 Pro and 9800 Pro cards are good examples of them being dumbasses about it. NVIDIA actually did put cooling on their chip, and their bridge chip ran cooler than the one ATI used.

So do the old girl a huge favor and throw some cooling at the bridge chip and see if that helps; it can't hurt and will at the very least bring peace of mind that heat won't be killing it.

Reply 34 of 49, by Archer57

Rank: Member
Trashbytes wrote on 2025-06-06, 12:20:

The issue with it getting hot is not whether it works at all, but whether it works correctly - as in, not falling on its face as soon as it's thrown under load. The fact that it's too hot to touch while at idle suggests to me that it needs some cooling, or that it's already partially damaged, as it shouldn't be getting that hot when not under load. I had a HIS IceQ 2600 XT AGP back in the day, and I remember having to stick a heatsink onto that chip to keep it cool, as it would start to flake out when gaming and cause system crashes; once I threw a heatsink at it, it was a much more stable card. These ATI bridge chips are well known in the community for dying from heat - no idea what drugs the ATI engineers were on to not put a heatsink on them.

So don't take the fact that ATI didn't put cooling on it as meaning it's fine... ATI were idiots about inadequate cooling, and the 9700 Pro and 9800 Pro cards are good examples of them being dumbasses about it. NVIDIA actually did put cooling on their chip, and their bridge chip ran cooler than the one ATI used.

So do the old girl a huge favor and throw some cooling at the bridge chip and see if that helps; it can't hurt and will at the very least bring peace of mind that heat won't be killing it.

I would gladly do that; in fact I would have done it already if it were simple.

The substrate around the die is covered with... something which mostly resembles duct tape (fabric of some sort) on top of a thermal pad. It sits significantly above the die, so I cannot just put a heatsink on top. If I remove that... stuff, there will be SMD components on top of the substrate and a small bare die. Just gluing a heatsink on top of that feels wrong and will likely lead to physical damage. There are no holes around it (like on that 6600, for example) to mount a heatsink that way.

So how do I do it? How did you do it with your card, if you remember?

Reply 35 of 49, by Trashbytes

Rank: Oldbie

I simply removed just enough of the foam stuff they had around it to allow a small heatsink to be stuck to the IC with double-sided thermal tape; it really didn't need much messing around.

It should look something like this, right?

The attachment hd2600xt.jpg is no longer available

If so, just carefully remove enough of the top of the foam to allow a small heatsink to be stuck to the IC; no need for anything more than a bit of thermal tape here.

Reply 36 of 49, by Archer57

Rank: Member

So I've just cut a 1 mm thick thermal pad to fit the die and used that + hot glue to temporarily affix the heatsink, without modifying anything yet.

I've also replaced 3 capacitors. The one in the picture above definitely was leaky - bulged from the bottom and completely dead. The other ones were fine, but since there were only 3 like this on the board I decided to replace all 3. These are a pain to desolder, especially after the solder joints have been corroded by electrolyte...

The good thing - I did not break anything; the bad thing - nothing changed.

I've also run some games, and yes, some of them do crash just like 3DMark does. Some work fine though, and all work fine when the FSB is set to 133. I've also seen some artifacting, which could suggest bad things, but then it is gone at 133 FSB...

And I've gotten some... quite unimpressive test results:

The attachment 2600xt_01.jpg is no longer available
The attachment 2600xt_03.jpg is no longer available

Granted, this is with a 2000+ / 1666 MHz / 12.5x133 Thoroughbred-B, because it does not work above 133 FSB...

I think I'll pause with this card for now, until I can find a different platform to test on. This might be a compatibility issue, and I do not want to do anything destructive until I rule that out...

Reply 37 of 49, by AlexZ

Rank: Oldbie

It would probably be worth having a cheap P4 or Athlon 754 system with AGP for GPU testing. Both tend to be cheaper than Athlon XP and can usually be bought very cheap as a complete PC. It is for this reason that I plan to keep one Socket A and one Socket 754 AGP board. I have enough parts to build new systems, but they are not my focus and thus lie unused in a drawer. When it comes to capabilities, plain motherboards with basic OC will do; those were bought as internet PCs and not used as heavily as high-end gaming PCs. I also tend to have very similar GPUs: I have a GeForce 7600 GT but also a GS, and I have multiple variants of the GeForce 9800 GT. Should there be a problem with one, I can diagnose it quickly. Rather than having one high-end part I prefer to have multiple cheaper ones.

My eyes are now on an AMD AM2 Phenom, B3 stepping with the hardware TLB bug fix. Very low clock, but about the same single-thread performance as an Athlon 64 X2 6000+ at 3 GHz. Inferior to a Core 2 Quad.

Pentium III 900E, ECS P6BXT-A+, 384MB RAM, GeForce FX 5600 128MB, Voodoo 2 12MB, Yamaha SM718 ISA
Athlon 64 3400+, Gigabyte GA-K8NE, 2GB RAM, GeForce GTX 260 896MB, Sound Blaster Audigy 2 ZS
Phenom II X6 1100, Asus 990FX, 32GB RAM, GeForce GTX 980 Ti

Reply 38 of 49, by Archer57

Rank: Member

Yeah, it would be incredibly easy to get a bunch of "junk computers" locally for nearly free, and there is always a chance of finding something interesting inside, but most of the time they turn out to be garbage office or home "internet PCs", as you've said, with absolutely nothing of value. Motherboards... can be useful, but they are going to have dead caps, and there are also a lot of really, really bad boards not worth even bothering with - like the one I dug up at work with fake AGP, and pretty much anything with integrated graphics from that time period.

So I try to avoid that, as it leads to an accumulation of stuff I know I'll have trouble getting rid of and which will likely never be used. I'd much rather find a board I like, in decent shape, from one of those people who do all the digging and sell things that seem nice to "retro enthusiasts". Yeah, I'll have to pay ~$20-40 for something I could otherwise get for nearly free, but I'll avoid accumulating junk and playing the lottery.

I also do not really want to build an S478 or S754 system. I already have 2 nearly redundant S462 systems, and those would be very similar in terms of capabilities. So those boards would be lying in a drawer, which I'd really like to avoid. As useful as it can be for troubleshooting, it also takes space, which is limited.

So I'll probably be aiming for early S775, ideally with something like a Pentium D 820/830/840. Or maybe S939 with a dual core and AGP. Yes, having AGP in such a system is limiting and generally a bad idea, but I find that hardware more interesting, and PCIe... is just too easy. I'd just end up sticking another GTX 660 into it and ending up with another overkill system...

Or maybe S423, simply because of how weird and bad that platform was, though it may be too slow/old for what I need here...

Reply 39 of 49, by AlexZ

Rank: Oldbie

My Epox EP-8RDA+ (Socket A, has SATA, unlike the version on TheRetroWeb) as well as my Asus CUBX (Socket 370) came from such junk computers. The caps are visually OK. Maybe I was lucky, but I didn't need to collect dozens of junk PCs.

Many of my boards are Gigabyte, as they have nothing special - the BIOS usually has fewer settings than MSI - and people buy them for internet PCs. A weak PSU is a good sign. The Phenom I'm buying came with just a 300 W PSU and DDR2-667 (not getting those useless parts); the board is a Gigabyte, but with decent OC and memory configuration options in the BIOS. The Asus CUBX had a Celeron CPU.

I sold an EP-8K5A2+ cheap due to bad caps, as the repair wasn't paying off for me timewise. I didn't want to spend the time recapping it one cap at a time, so I let the new owner do it.

I try to get two pieces of the same/similar hardware. That's why I got a 2nd GeForce GTX 260 very cheap. Diagnosing issues is then very fast.

Pentium III 900E, ECS P6BXT-A+, 384MB RAM, GeForce FX 5600 128MB, Voodoo 2 12MB, Yamaha SM718 ISA
Athlon 64 3400+, Gigabyte GA-K8NE, 2GB RAM, GeForce GTX 260 896MB, Sound Blaster Audigy 2 ZS
Phenom II X6 1100, Asus 990FX, 32GB RAM, GeForce GTX 980 Ti