VOGONS


First post, by Rikintosh

Rank: Member

I remember that around 2007-2008 there was that scandal over Nvidia chips with manufacturing defects in the solder used (though that was arguably the fault of the company that manufactured the packages, which also produced chips for ATI/AMD, whose parts were affected too, yet the blame fell only on Nvidia).

The officially affected models were the G8x and G9x chips, but we know this issue had been occurring since the GeForce 6xxx era.

My big question is: which chips were affected and which ones were revised? For example, I see some G96-2xx chips and some G96-6xx chips. Is the 6xx the improved revision? Is there a chip replacement guide for going from "early" to "final" revision, including serial numbers and that sort of thing? I don't want to go to the trouble of swapping the chip only to find out a few years later that I put another defective one in its place.

Take a look at my blog: http://rikintosh.blogspot.com
My Youtube channel: https://www.youtube.com/channel/UCfRUbxkBmEihBEkIK32Hilg

Reply 2 of 22, by Hoping

Rank: Oldbie

On desktop, any GPU from the 6xxx series up to the 2xx series.
I think even the FX5800 was already affected, or maybe it was just the very bad cooling it had.
For chipsets, anything newer than the nForce 3.
On laptop GPUs, the 6xxx series until at least the 3xx series; I had a laptop with a GeForce 315M that never even reached 70C and it died anyway.
The 8xxx series was the worst by far; the G80 GPUs were made to fail. Only a water-cooled G80, or one with an aftermarket cooler, is likely to still be in good condition.
In my experience, only when bumpgate-affected hardware has very good cooling that keeps it under 60C is there hope that it is still in good condition and will last.
ATI also had problems, but their chipsets and GPUs were more resilient and less prone to failure. I saw a lot of HD 2400M GPU failures back in the day, yet I have an HD 2400M in an ASUS laptop that is still in perfect shape; it never goes over 50C, an exceptional case.
The first ATI card failures I saw were on one model of Fujitsu laptop with an X1300; a customer came to the shop claiming that his laptop had caught fire and burned his desk. All of those laptops had a very weak cooler for the GPU, and every one of them failed.
The Radeon 9700 and 9800 also failed because their cooler had a design flaw.
I have an Alienware M17xR3 that I got cheap; it came with an HD 6990M that failed after fourteen months, because it was always over 60C even when not gaming, and when gaming it reached 90+C within seconds, the worst cooling I've seen on expensive hardware. The previous owner wanted a gaming laptop, so it was useless to him.
I only have one motherboard left, an nForce 570 with a good heatpipe and a good heatsink, and one 8800GT (G92) that I reflowed and fitted with a powerful cooler years ago, but I never used it much.
So, all things considered, cooling is the key. From my point of view, no GPU should go over 65C, because the difference in thermal expansion between the GPU and the PCB can end up breaking the solder balls over time (a rough sketch of the arithmetic is below). I think any flip-chip BGA will eventually fail from overheating unless it has good thermal protection.
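To put very rough numbers on that expansion mismatch (a back-of-the-envelope sketch; the values are typical assumptions, not measurements: an FR-4 board at about 16 ppm/K, silicon at about 3 ppm/K, a 40C warm-up, a corner ball 20 mm from the package centre, and a 0.4 mm joint height):

\[ \delta \approx \Delta\alpha \cdot \Delta T \cdot L_{DNP} = (16-3)\,\mathrm{ppm/K} \times 40\,\mathrm{K} \times 20\,\mathrm{mm} \approx 10\,\mu\mathrm{m} \]
\[ \gamma \approx \delta / h = 10\,\mu\mathrm{m} / 400\,\mu\mathrm{m} \approx 2.5\% \]

A cyclic shear strain of a few percent is exactly the regime where solder joints accumulate fatigue cracks over hundreds of power cycles, which is why keeping the temperature swing small matters so much.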
I always remember when people saw their GeForce 6800 reach 80+C very easily on the stock cooler and said, "no problem, Nvidia hardware is the best, it's made to take more than that"... well... it was the way it's meant to be played.
That's only my point of view based on my experience, and I may be wrong about more than one thing.

Reply 3 of 22, by Rikintosh

Rank: Member
Hoping wrote on 2023-02-23, 21:52:

The ATI X### series also failed (although it was easier to fix, as most only failed at the solder balls connecting the chip to the PCB). iMacs up to 2009 that used the Radeon HD 2400 and 2600 also tended to fail.

The big problem with Nvidia was that even if you removed the chip and redid all of its solder balls, it would still fail, because the bad joints were the bumps between the die and the package itself.

My problem is that I know Nvidia fixed this (I believe in the G96 generation); those chips still failed, but because of the solder joining the package to the PCB, not because of the bumps under the die.

I really like laptops from that era, and I want to start buying them now that prices are very low, before they get expensive again. I also want to replace the defective chips now, while I can still find them in China; in a few years they will likely have disappeared.

Take a look at my blog: http://rikintosh.blogspot.com
My Youtube channel: https://www.youtube.com/channel/UCfRUbxkBmEihBEkIK32Hilg

Reply 4 of 22, by PcBytes

Rank: Oldbie

My experience was mostly hassle-free - chips that hadn't been reflowed before had a bigger tendency to live longer - most of the G86s from the HP dv6000s and dv9000s I reflowed fall into this category - I have one dv6000 with an 8400M GS, one Turion-based dv9000 and two C2D + 8600M GS based dv9000s. Three needed a reflow. After reflowing, all four live to this day - I got most of them during 2018-2020, so for a reflow to last 3-5 years, that's a big plus.

On the other hand, there are some that are too far gone. These were usually either non-reflowed chips that ALREADY showed screen twitches (something slightly similar to VCR tracking, but it happens very rarely) or chips cooked to oblivion - the best contenders here were an ASUS RoG G70S, where no amount of reflowing could bring the joints back together - the die likely separated from the glue and chip for good - and a Packard Bell MB65 that had already had 5 reflows - it would work for roughly 10 to 30 minutes, then crash and burn with violent artefacts.

I would have also added a Sony Vaio FZ21M, but that one seems to have still held up fairly well, requiring a reflow only maybe once or twice after a proper repaste.

So it also boils down to how much abuse the chip has seen - both reflow-wise and thermally - if the chip has been maintained (as in, repasted) regularly, chances are it will keep going for an equally long period after a reflow. If the chip has already had a large number of reflows, it's a total lottery - you might get it to work just as long as a less abused chip, or it could fail within a week or even less.

The ones I've never had to reflow are an ASUS F8Se (not sure if I wrote the right model - it's an F8-series laptop as far as I remember), and one of the Intel dv9000s that had both a fingerprint reader and a Nordic keyboard (either Danish or Norwegian - I remember it having those Æ and Ø keys printed in different colors, blue and green on some, green and red on others) - that one just needed a repaste and it was good to go.

"Enter at your own peril, past the bolted door..."
Main PC: i5 3470, GB B75M-D3H, 16GB RAM, 2x1TB
98SE : P3 650, Soyo SY-6BA+IV, 384MB RAM, 80GB

Reply 5 of 22, by Hoping

Rank: Oldbie

I don't think Nvidia fixed the problem with the G96s, because we had a customer who bought a Gigabyte GTX 550 Ti and started having random problems with his rendering programs after only three months. Since I already looked at Nvidia with a wary eye, I told my boss, "it's an Nvidia, the problem is the graphics".
The test: FurMark... 96C in less than a minute, and after two minutes... dead.
Then, after the warranty replacement... the new graphics card had suspicious white powder all over the PCB, so I told my boss, "FurMark", and imagine what happened: the "new" graphics card didn't last a minute.
The customer in question lived 500 km from us, because he had moved after buying the computer, if I remember correctly. So the solution was to replace the GTX 550 with an equivalent and even cheaper Radeon; I don't remember the model of the Radeon, unfortunately. Maybe it was more a failure of Gigabyte than of Nvidia, who knows.
I don't know when Nvidia implemented thermal throttling on its GPUs.
I have nothing against Nvidia, AMD or Intel; I have something against my money being stolen, and Nvidia caused great losses during that time. For example, I saw more than a hundred laptops with Nvidia graphics die after no more than two years of use, even when the owner only used Office and the Internet.

Furthermore, I am not an expert; I don't know how to do a reballing, nor do I have the tools. But even today YouTube is full of videos in which a reballing solves the problem. I don't think current GPUs fail like the older ones did, but I do think the idea that 70C is normal and safe is not true at all, because the adverse effects of material expansion can already occur at that point and cause problems.
The big problem is that the public interprets it as normal and safe, but if we look at server hardware, we find that such high temperatures are exceptional even for high-TDP parts: a Xeon above 50C at low-to-medium load is a symptom of some problem, and a Xeon at 70C or higher under high load is usually a symptom of a cooling failure.
I don't have a lot of experience with server hardware, but from what I've seen it seems to work like this.

Personally, I only have a pair of Pentium III 1000s on a Supermicro board that never go over 50C, and a pair of Opteron 2384s on a Tyan board with an Nvidia chipset where the processors have never hit 50C either; the chipset is only warm to the touch, and the temperature the motherboard reports for it is always between 30C and 40C, even in summer.
Finally, I have a Xeon E5-1620 v2 on a Fujitsu D3128-B25 motherboard with an Asetek AIO, and it hardly exceeds 40C in summer, even after hours of gaming.

So I think the key is cooling; even a GeForce G80 will last a long time if it has very good cooling. But on laptops that's very unusual; I've had to mod or even build a new laptop heatsink to reach the supposed max turbo frequency.
For example, an HP laptop with an A10-5570M APU that, when it reached 77C, throttled to 1100MHz instead of the standard 2500MHz and almost never entered turbo mode. I had to add a second heatpipe to the cooler, and now it enters turbo mode easily; HWiNFO reports the frequency reaching 3587MHz when the supposed max turbo is 3500MHz.
I also have examples of graphics cards not reaching 70C on air cooling alone, but that's enough reading for now.

Last edited by Hoping on 2023-02-24, 17:15. Edited 1 time in total.

Reply 6 of 22, by The Serpent Rider

Rank: l33t++

General consensus:
90nm - (GeForce 7 refresh, some chipsets, etc.)
80nm - all GeForce 8 series, especially G80, which was expected to work at 90C for prolonged periods of time.
65nm - GeForce 8 refresh (G92), GeForce 9 series and the original GTX 2xx series.
Some 55nm chips (apparently, "Bumpgate" was fixed somewhere mid-production of the 55nm chips) - G92b, GT200b, etc.

You may also watch this video dedicated to the PS3 RSX problem, which makes a rough estimate by year of production.

Now, it is interesting whether all, some or none of the 110nm parts are affected. Anecdotally, the original nForce 4 chipsets were 110nm and dropped like flies, but the GeForce 6600 and 6800 GS seem to be mostly fine.
The 7800 GT/GTX were also reported to have some issues, but that series and the nForce 4 chipsets usually had one thing in common - really shitty coolers for their TDP, which will kill even a "healthy" chip.

Hoping wrote on 2023-02-23, 21:52:

On desktop, any GPU from the 6xxx series up to the 2xx series.
I think even the FX5800 was already affected, or maybe it was just the very bad cooling it had.

The most common problem of the GeForce FX 5800 stems from the shitty Samsung BGA memory, which also plagued the Radeon 9500/9600/9700/9800 series. The GPU chips themselves are fine.

I must be some kind of standard: the anonymous gangbanger of the 21st century.

Reply 7 of 22, by Hoping

Rank: Oldbie
The Serpent Rider wrote on 2023-02-24, 15:11:

That video is maybe the best I've seen about this matter.
I'm from Europe; Nvidia got sued in the US, but here they got away with all the money.
For me, it is clear that the only real fix for bumpgate is to reduce the heat; it is obvious that a 40nm chip will produce less heat and be happier with the same cooler used for a 90nm one. So I think the hardware assemblers like Sony, Microsoft, Asus, Gigabyte, etc. are also liable, because they used bad cooling systems; you don't need months to see that your hardware is overheating. But as long as it lasts past the warranty period, who cares; there's planned obsolescence, after all.
That video confirms my theory that if a bumpgate-affected chip never goes past 55-60C, it will last a long time. I don't think they were worried about their customers, and they never cared to fix the problem; they fixed it by chance, only because they advanced the technology used to make the chips.
If they cared for their customers, I wouldn't have an 8800 Ultra, an 8800 GTS 320, a 7900 GS, a 6800 GT, a laptop with an 8400, two laptops with a 6600 Go, a laptop with a 315M, one motherboard with an nForce 4, two motherboards with an nForce 560 and one motherboard with an nForce 680 all sitting in the "for parts" bin, and guess what, they weren't free.
I'm sure there are a lot of people like me. From my point of view, this was worse than the capacitor plague, because replacing capacitors is possible with basic tools, but replacing a defective GPU or chipset is not so easy.
So, for bumpgate-affected hardware, improve the cooling as much as you can.
And the same goes for newer hardware; I've seen newer hardware die from overheating too. My personal absolute max limit is 70C and my warning limit is 60C. I think I'm a bit obsessed with that, but I'm tired of losing hardware because of it.
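For anyone who wants to police those limits automatically, here is a minimal sketch (assuming an Nvidia card on a system where nvidia-smi is on the PATH; the 60C/70C thresholds are just the personal limits mentioned above, nothing official):

#!/usr/bin/env python3
# Poll GPU temperature via nvidia-smi and nag when it passes personal limits.
import subprocess
import time

WARN_C = 60  # personal warning limit
MAX_C = 70   # personal absolute limit

def gpu_temps():
    # With these flags nvidia-smi prints one bare temperature (deg C) per GPU.
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=temperature.gpu",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    return [int(line) for line in out.splitlines() if line.strip()]

while True:
    for i, temp in enumerate(gpu_temps()):
        if temp >= MAX_C:
            print(f"GPU {i}: {temp}C - over the absolute limit, stop the load!")
        elif temp >= WARN_C:
            print(f"GPU {i}: {temp}C - above the warning limit")
    time.sleep(5)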

Last edited by Hoping on 2023-02-24, 20:33. Edited 1 time in total.

Reply 8 of 22, by bogdanpaulb

Rank: Member

Asus A7N8X-E Deluxe, nForce2 MCP-T (south bridge): I have 2 with this issue and have seen at least 2 in the wild with this problem (all of the errors below, at random):
- random crashes in the OS
- random keyboard error at post
- random overclocking error at post (without OC)
- random bios rom checksum error at post

Reply 9 of 22, by The Serpent Rider

Rank: l33t++

The nForce 2 chipset can't bumpgate. At all. It's not flip-chip: in both chips the die is connected to the package through bond wires, like old CPUs in ceramic packaging. While the chip's external connection to the board is indeed BGA, that is rarely an issue.

I must be some kind of standard: the anonymous gangbanger of the 21st century.

Reply 10 of 22, by Rikintosh

Rank: Member

I live in Brazil and it's very hot here.

In my experience:

GeForce 6xxx all have ball problems, but they will work for a few years before showing them, except the 6800 (there were 6800s that failed after 3 months of use, and that failure was at the die; I have one). Desktop GF6200s, especially XFX ones, fail after a few years (2 or 3), and with each reflow they work for less time.

GeForce chips in HP Pavilions (especially the 6100 and 8400) are total shit; they lasted exactly the one-year warranty. Here in Brazil there was a big class-action lawsuit against HP, which forced them to do a recall (but they just swapped the motherboards for others with the same shit defect; even today it is possible to find these defective notebooks for sale, nobody wants them).

The 8xxx desktop series (especially the 8600GT) I saw a lot of people baking and reflowing, but in my experience I solved most of the problems I encountered just by changing capacitors, and the same goes for the fat PS3 (NEC/Tokin). The only PS3s I got that really needed a reballing and/or chip swap were launch editions, which usually had a PlayStation 2 processor on the motherboard. On the other hand, the Xbox 360 used an ATI chip, and in the vast majority of cases reballing was not viable (the processor could also have soldering problems); I only saw a small improvement with the Jasper board, but still, on the last Slim E models there are some cases of problems with broken balls.

The 8xxx series on laptops was a disaster for me. My first was a 2007 MacBook Pro, I think; it was a Core 2 Duo with an 8600GT, and I played BioShock 1 on it for a few days until it cooked everything. Apple is the most disgraceful manufacturer there is; they purposely make systems with terrible cooling (I believe due to planned obsolescence). I LITERALLY fried an egg on a PowerBook G4 1.67GHz. So I took it apart, polished the heatsink, drilled two circular holes for the fans to pull air in, covered the holes with the mesh you put on windows and doors to keep bugs out, put on some decent thermal paste, and voilà! 15 to 20 degrees less on a PPC laptop.

I have an Acer 8930G that uses a DDR3 9600M GT MXM card; it is already on its 3rd video card. It previously had a 1024MB DDR2 9600M GT, twice, and both failed.

ATI 9xxx and X### on laptops: ALL of them, without exception, gave me problems. To this day I'm waiting for a chip to appear on AliExpress so I can fix an Asus I have great respect for, which has a Radeon 9700. By the way, now I remember the one ATI 9xxx laptop that didn't give me problems (though it got very hot): a Dell Inspiron XPS (1st gen) with a Radeon 9800 and a P4 EE. I didn't keep it very long; the fan noise irritated me and I ended up selling it.

Even the Nintendo Wii had graphics issues with its ATI chip (at least the ones I had did).

On Nvidia's fault: it's not entirely Nvidia's fault. The engineers did a lot of things wrong in the SIM project, but a good part of the blame lies with the semiconductor manufacturer (I don't remember the name); at the same time, that manufacturer also made ATI chips, and ATI chips failed too. But the avalanche really started in 2003 with stupid environmental laws that decided to remove the lead from the solder used. It took 15 years to refine lead-free solder into what we use today, which is a bit more robust (but still not strong enough).

Take a look at my blog: http://rikintosh.blogspot.com
My Youtube channel: https://www.youtube.com/channel/UCfRUbxkBmEihBEkIK32Hilg

Reply 11 of 22, by The Serpent Rider

Rank: l33t++
Hoping wrote on 2023-02-24, 17:14:

If they cared for their customers, I wouldn't have an 8800 Ultra, an 8800 GTS 320, a 7900 GS, a 6800 GT, a laptop with an 8400, two laptops with a 6600 Go, a laptop with a 315M, one motherboard with an nForce 4, two motherboards with an nForce 560 and one motherboard with an nForce 680 all sitting in the "for parts" bin, and guess what, they weren't free.

The GeForce 6800 GT was manufactured on the same node as the previous GeForce FX series - 130nm. Many work just fine to this day, despite a quite high TDP. They are quite robust, but obviously not unkillable; all chips with that design are doomed to die under heavy stress. "Bumpgate" chips just had an especially short service life under such conditions.

I must be some kind of standard: the anonymous gangbanger of the 21st century.

Reply 12 of 22, by bogdanpaulb

Rank: Member
The Serpent Rider wrote on 2023-02-24, 22:11:

The nForce 2 chipset can't bumpgate. At all. It's not flip-chip: in both chips the die is connected to the package through bond wires, like old CPUs in ceramic packaging. While the chip's external connection to the board is indeed BGA, that is rarely an issue.

Well, something is causing these symptoms. Probably a bad soldering job from the factory; they manifest at cold start, and pressing hard on the chip makes the flaw go away most of the time. After it gets hot, it's stable for OS usage/restarts. If you shut it down and leave it for ~12-24 hours, it reappears. I've decided to reball them, as they are useless in this state (the one without caps is an older purchase, from around 15 years ago; I used it for ~2 years before it started. The other one I got last year already faulty. Both manifest the same with different CPU/RAM/video/IDE/SATA/PSU/caps).

Attachments

  • IMG_3327.JPG (1.68 MiB)

Reply 13 of 22, by Rikintosh

Rank: Member
bogdanpaulb wrote on 2023-02-24, 23:06:

I had exactly the same problem/symptom, but with HP Slimline motherboards. They were nForce chips, and these were prone to malfunctioning. Anyway, I discovered that the nForce got a heatsink with a very poor quality thermal compound, which transferred only 2mW of heat (or less); I believe it ages and loses its thermal properties drastically, making the chip overheat. This overheating when turned on, and rapid cooling when turned off, over several months stresses the joints until the solder balls crack. So I swapped the balls for leaded solder balls, and I also changed the heatsink and thermal compound, and the computer went back to working perfectly for years.

I also had an HP/Compaq, I don't remember the model; it was a dark blue computer with a silver front, an AMD Athlon (Socket 462), DDR1, and an nForce chipset with a nice onboard GeForce and two VGA outputs. I found it in a recycling bin, and a simple reflow of the nForce got it working again. I think I still have that computer tucked away in my stuff to this day.

Older computers (from 2003 onwards) are not free from bumpgate-like problems (by which I mean it's not that the chip has a design problem, but they have sensitive solder joints that crack with a little overheating, such as from a dirty cooler or old thermal grease). I have stacks of old notebooks that I bought for ridiculous amounts or got for free because they didn't work and were too old; a good part of them came back to life with a simple reflow.

Now, pulling from memory, I can name a few that are typical for causing problems:

iBook G3 (upside-down video chip) and G4 (especially Nvidia FX and ATI 9xxx)
Compaq Presario 1800 (video chip is upside down)
iMac G4 Sunflower
iMac G5 with Radeon
any ThinkPad with ATI Radeon (the ones with memory on top of the chip)
Dell C8xx
Acer TravelMate Pentium III + ATI (any of them)
Asus with Pentium M + ATI/Nvidia (most)

Take a look at my blog: http://rikintosh.blogspot.com
My Youtube channel: https://www.youtube.com/channel/UCfRUbxkBmEihBEkIK32Hilg

Reply 14 of 22, by Hoping

Rank: Oldbie
Rikintosh wrote on 2023-02-25, 00:17:

Very interesting info. I had an Acer Pentium M laptop with an X300, and it also died, because it had a cooler that would only have been adequate for a 386 😉. I think I've seen few ATI chips die because they were less common around here and more resilient, whatever the reason; around here the Nvidia chipsets were almost omnipresent in AMD-based computers.
I also saw a lot of HP desktops with an Nvidia chipset die; I don't remember the chipset model, only that they had a GeForce 6150 IGP.
In the end, Nvidia lied, claiming their chips could withstand temperatures of 70C and up, so the hardware vendors used inadequate coolers. ATI did almost the same, but either their chips were a bit more resilient or Nvidia lied more about the max TDP of its chips than ATI did; either way, the ATI chips tended to end up with better cooling than the Nvidia ones.
Lies about the max TDP of GPU chips are still common nowadays, I think, because I see no other reason why even the most expensive graphics cards easily reach temperatures close to 80C; surely the hardware manufacturers want to be fair to their customers, I guess.
But I'll never understand why on earth the PCIe bridge on ATI AGP cards didn't have a cooler; it was always scorching hot to the touch. The bridge on Nvidia AGP cards always had one.
I've seen a lot of ATI AGP cards die because of the PCIe bridge chip.

Reply 15 of 22, by Rikintosh

Rank: Member
Hoping wrote on 2023-02-25, 11:54:

Some bridges were damaged when that thermal pad was removed without due care, because for some reason it stuck to the small capacitors and pulled them off when cold. Resoldering those capacitors onto the chip was an almost impossible task.

I think the biggest problem with the lack of a heatsink is leaving the die exposed. I have an AGP 4650 with a damaged die because the previous owner let something hit it. That chip is pretty hard to find.

Personally, I always glued a small piece of copper onto the die with tacky thermal grease; not only did it improve heat dissipation, it also protected the die a little better. I think they assumed they could leave it without a heatsink because the air from the CPU cooler would be enough to cool it, but not all users had a stock cooler, especially those using water cooling.

Take a look at my blog: http://rikintosh.blogspot.com
My Youtube channel: https://www.youtube.com/channel/UCfRUbxkBmEihBEkIK32Hilg

Reply 16 of 22, by Deksor

Rank: l33t

One thing I wonder, which gets dismissed in the PS3 video because for the PS3 it's not feasible:
is it possible to "fix" these video cards by using a much beefier aftermarket cooler?
The PS3's GPU underfill becomes useless at ~70°C. Like felix said here, it's not really possible to improve the PS3's cooling much beyond what's already there.
But here we're talking about PC hardware.
Could a still-working series 8 card be made to always run below 70°C with a giant cooler, and be kept working for far longer?
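If the failure really is bump fatigue, one can put a rough number on that hope (a sketch using the classic Coffin-Manson fatigue relation with an assumed exponent of about 2 for solder; the real constants for these bumps aren't public, so treat this as an order-of-magnitude guess):

\[ N_f \propto (\Delta T)^{-2} \quad\Rightarrow\quad \frac{N_f(\text{big cooler, 60C peak})}{N_f(\text{stock, 80C peak})} \approx \left(\frac{80-25}{60-25}\right)^{2} = \left(\frac{55}{35}\right)^{2} \approx 2.5 \]

So dropping the peak from 80C to 60C would very roughly 2.5x the number of heat-up/cool-down cycles the joints survive, on top of staying below the ~70°C point where the underfill reportedly stops doing its job.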

Trying to identify old hardware ? Visit The retro web - Project's thread The Retro Web project - a stason.org/TH99 alternative

Reply 17 of 22, by cyclone3d

Rank: l33t++

The first 8800 Ultra card I bought ran super hot even at idle. I replaced the cooler with a nice Zalman heatpipe cooler and also used a Zalman cooler for the VRMs and RAM, though I did have to mod the VRM cooler to fit alongside the GPU cooler.

Runs very cool now.

My guess is that the original cooler was faulty.

I have a number of other 8800 Ultra cards that I bought at almost the same time but have never used.
I plan on making an 8800 Ultra SLI rig at some point.

The HP DV-series laptops had major issues because of the absolutely crap cooling. Most of them didn't even have a proper cooler for, I think it was, the GPU. All that was needed to keep them from dying was to add a copper spacer between the GPU and the aluminum on the palmrest.

Yamaha modified setupds and drivers
Yamaha XG repository
YMF7x4 Guide
Aopen AW744L II SB-LINK

Reply 18 of 22, by Rikintosh

Rank: Member
Deksor wrote on 2023-02-25, 22:03:

Yes, the PS3 can be improved in several ways. I studied this for many years because I like to assemble "gold" versions, compilations of the best parts and mods; that's how I built a fat PS2 free of disc-reader problems and with a super silent cooler.

The PS3 had several different coolers, with varying numbers of fan blades. But I think where you'll feel the biggest difference is delidding, and adapting the whole system to use a copper heatsink. The original heatsink is very similar to a passive one and is made of aluminum; making a similar one in copper is expensive, but combined with the delid you will see temperatures up to 20 degrees lower. During the delid process I usually protect everything around the die with a few layers of enamel paint, to prevent anything metallic from creating a short. Unfortunately I had to sell my fat PS3 builds, but I have a modified Super Slim over 10 years old that has never broken down and has always run below 60C.

If your fat PS3 is very old, it is possible to swap the GPU for a newer one produced on a smaller node. But it is an artisanal and VERY laborious process; the system will not work right after the swap, and you will need to reprogram a chip on the motherboard, which is the brain of everything, to tell it how to start the new GPU.

Take a look at my blog: http://rikintosh.blogspot.com
My Youtube channel: https://www.youtube.com/channel/UCfRUbxkBmEihBEkIK32Hilg

Reply 19 of 22, by The Serpent Rider

Rank: l33t++
cyclone3d wrote on 2023-02-26, 02:00:

The first 8800 Ultra card I bought ran super hot even at idle. […] My guess is that the original cooler was faulty.

High idle temperatures are expected, because G80 cards have no power-saving features. Not sure, but different brand-name 8800 Ultras probably had slightly different voltage sets.

Deksor wrote on 2023-02-25, 22:03:

Could a still-working series 8 card be made to always run below 70°C with a giant cooler, and be kept working for far longer?

Probably more like 50-60C, because the GPU die is big and possibly has multiple hotspots.

I must be some kind of standard: the anonymous gangbanger of the 21st century.