VOGONS


NV bumpgate lead-free solder debacle


Reply 20 of 56, by Scali

Rank: l33t
mockingbird wrote:

Hey, the truth is sometimes stranger than fiction. I didn't make this up.

No, you read it on the internet, someone else made it up 😀

mockingbird wrote:

And we're not talking about nVidia cards of that era failing after 3 years. We're talking about cards dropping like flies after several months of usage.

I have lost an 8800GTS320 and a 9800GTX+ to this, both lasted about 3 years.
The 8800GTX I have is still going strong.

mockingbird wrote:

Just look at consumer-submitted Newegg follow-up reviews of GeForce cards from that era to get a pretty good idea of just how long these cards lasted on average.

Yes, that's also the placebo effect of course.
A lot of problems may actually just be PEBKAC, but every bug or failure is automatically attributed to bumpgate.
I had the same thing with this driver bug: https://scalibq.wordpress.com/2013/12/01/nvid … -400500-series/
People continued to report bugs with later driver versions, which are NOT related to this issue. This issue was fixed in the drivers I mention. Their problems are probably because of a bad PSU, corrupt Windows installation or whatever. Still they blame it on this issue.

At any rate, it still doesn't have anything to do with the technical merits of the architecture. It's the architecture that I find interesting as a 3d graphics developer. I take it you're not interested in technology. I only see you ragging on nVidia for the failure rates.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 21 of 56, by mockingbird

Rank: Oldbie
Scali wrote:

No, you read it on the internet, someone else made it up 😀

Heh. I admire your adulation of nVidia and your propensity for steadfastly defending it, but those electron microscope pictures don't lie. Designing a chip is a two-step process: there's the theory, and then the engineering. I think nVidia's theory (that is to say, their programming) was sound. But I think something went terribly wrong in their engineering department during that time. And there's the old adage of "Never attribute to malice that which is adequately explained by stupidity", so no one knows for sure why they let it go on like that for so long.

I have lost an 8800GTS320 and a 9800GTX+ to this, both lasted about 3 years.
The 8800GTX I have is still going strong.

OK, so like I said, my 2900XT has outlasted your 9800GTX+, and it predates it by more than two years. That proves that the failure of your 9800GTX+ wasn't caused by unleaded solder, because, as was already stated, the 2900XT also used unleaded solder.

A lot of problems may actually just be PEBKAC, but every bug or failure is automatically attributed to bumpgate.

It's true that the more power computers use, the more things can go wrong, which means that someone who used these high-wattage video cards and experienced problems could have had a lousy PSU or bad caps on the motherboard. But that wouldn't explain the multitudes of pristine-looking GeForce G8x/G9x cards being sold today on eBay with a description of "as-is/not working".

At any rate, it still doesn't have anything to do with the technical merits of the architecture. It's the architecture that I find interesting as a 3d graphics developer. I take it you're not interested in technology. I only see you ragging on nVidia for the failure rates.

Again, nothing wrong with the architecture. But there's a bit more involved in producing a mass-market videocard than designing it on paper, so to speak, and running it on your in-house simulation programs.


Reply 22 of 56, by havli

Rank: Oldbie
mockingbird wrote:

My 2900XT OTOH is still fully operational. It's sitting in a drawer right now, but it's seen quite a bit of use and I'm sure it will outlast any 8800 out there.

Sorry... this claim is nonsense. All RoHS hardware is much more prone to failure, so we are talking about stuff manufactured in 2006 and later. Nvidia, AMD or any other manufacturer doesn't really matter. If your 2900 XT is working and the 8800 isn't, then you are just lucky. Here are a few examples from my collection that are/were defective:

Radeon 9550 - not working in 3D, no change after reflow
Radeon 9600 XT - no image, reflow not tried yet
Radeon 9800 Pro - artifacts, reflow didn't help
GeForce FX 5800 Ultra - sometimes artifacting, sometimes working fine
GeForce PCX 5900 - sometimes artifacting, sometimes working fine
Radeon X300 - artifacts, working fine after reflow
GeForce 7600 GT - no image, no change after reflow
GeForce 7950 GX2 - artifacts, working fine after reflow
Radeon X1950 Pro - no image, working fine after reflow
Radeon X1950 Pro - not working in 3D, no change after reflow
GeForce 8800 GTX - artifacts, working fine after reflow
GeForce 8800 GTX - artifacts, working fine after reflow
GeForce 8800 GTS 320 - artifacts, working fine after reflow
GeForce 8800 GTS 640 - artifacts, working fine after reflow
GeForce 8800 GTS 512 - artifacts, working fine after reflow
Radeon HD 2600 XT - artifacts, working fine after reflow
Radeon HD 2900 XT - artifacts, working fine after reflow
Radeon HD 2900 GT - artifacts, working fine after reflow
GeForce 8600 GTS - no image, working fine after reflow
GeForce 8600 GT - no image, working fine after reflow

HW museum.cz - my collection of PC hardware

Reply 23 of 56, by Scali

Rank: l33t
mockingbird wrote:

Heh. I admire your adulation of nVidia and your propensity for steadfastly defending it

I'm not defending nVidia at all. As I say, bumpgate is real, I lost two cards to it.
The sensationalist piece by SA however... that's a bit much.

mockingbird wrote:

but those electron microscope pictures don't lie.

They don't? How do we know that those pictures are even from an nVidia chip? Or that they're even real? And even if they're real, how do we know the chip was not sabotaged on purpose in order to take that picture?
I've not seen anyone other than SA bring forth all these claims, nor verify them.

mockingbird wrote:

But I think something went terribly wrong in their engineering department during that time.

That part is obvious... But this SA article goes way overboard.
It gives me the same bad taste in my mouth as the 'multiple vias' myth that surrounded the GTX480.
As if nVidia wouldn't have known about multiple vias, and wouldn't have used them in the design of the GTX480, even though they had been around for years (and had been used by nVidia and others in earlier processes as well).
Sure, the part about vias failing is true. But just because vias fail doesn't mean that nVidia didn't know about multiple vias. Even multiple vias can fail.

As I pointed out back then: the GTX580 doesn't have these issues, yet it is not a larger chip than the GTX480. If nVidia had had to add multiple vias to the chip in many areas, the chip would have had to grow to make room for them.
Apparently the real story is more that TSMC improved the reliability of their vias by the time the GTX580 got fabbed... and nVidia fattened up their vias in some crucial places, just to be safe.

mockingbird wrote:

And there's the old adage of "Never attribute to malice that which is adequately explained by stupidity", so no one knows for sure why they let it go on like that for so long.

One big reason is that nobody knew about it until the cards started failing, which was years later. Note that the 8800 is from 2006, and the first talk of 'bumpgate' arose around September 2008.
As I say, the chips tended to work fine for 2-3 years before breaking down, which is why we didn't fully see the problem until 2008.

mockingbird wrote:

OK, so like I said, my 2900XT has outlasted your 9800GTX+, and it predates it by more than two years. That proves that the failure of your 9800GTX+ wasn't caused by unleaded solder, because, as I already said, the 2900XT also used unleaded solder.

Incorrect.
There are various types of unleaded solder. Aside from that, there are other factors involved that influence how vulnerable the solder joints are to thermal stress.
By your logic I could also take my 8800GTX as an example instead of your 2900XT. Also older, and also lead-free.
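
For what it's worth, the standard first-order way to reason about that vulnerability is CTE (coefficient of thermal expansion) mismatch: the shear strain on a joint grows with the CTE difference between the two materials, the temperature swing, and the joint's distance from the centre of the package, and shrinks with the joint height. A rough sketch of that model, with purely illustrative ballpark numbers (not measurements from any actual card):

```python
# First-order CTE-mismatch model for the shear strain on a solder joint.
# All numbers are illustrative ballpark figures, not measured data.

def shear_strain(cte_a_ppm, cte_b_ppm, delta_t_c, dnp_mm, height_mm):
    """Approximate shear strain on a joint at a given distance from the
    neutral point (DNP) of the package, for a temperature swing delta_t_c."""
    delta_cte = abs(cte_a_ppm - cte_b_ppm) * 1e-6  # per degree C
    return delta_cte * delta_t_c * dnp_mm / height_mm

# Silicon die (~3 ppm/C) on an organic substrate (~17 ppm/C), a 60 C swing
# between idle and load, a bump 10 mm from the die centre, ~0.08 mm bump
# height -- ignoring underfill, which exists precisely to spread this strain:
print(f"{shear_strain(3, 17, 60, 10, 0.08):.1%}")  # ~10.5%
```

Which is exactly why die size, temperature swing and underfill matter at least as much as the solder alloy itself.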

mockingbird wrote:

Again, nothing wrong with the architecture. But there's a bit more involved in producing a mass-market videocard than designing it on paper so-to-speak and running it on your in-house simulation programs.

Yes, and nVidia is the largest and most successful GPU manufacturer in the world.
Even they make mistakes every now and then. Doesn't mean they don't have a clue about what they're doing. Shit just happens, as they say.

I feel the same as with Intel's Pentium 4. Yes, the architecture was quite inefficient. But people started making all sorts of outrageous claims about how Intel didn't know anything, and how their manufacturing was bad etc.
Well, the first series of Core2 was built on the same 65nm process as the last Pentium 4/Ds. Nothing wrong with manufacturing at all!

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 24 of 56, by mockingbird

Rank: Oldbie
havli wrote:

Sorry... this claim is nonsense. All RoHS hardware is much more prone to failure, so we are talking about stuff manufactured in 2006 and later. Nvidia, AMD or any other manufacturer doesn't really matter. If your 2900 XT is working and the 8800 isn't, then you are just lucky. Here are a few examples from my collection that are/were defective:

I didn't say RoHS solder isn't the cause of a lot of failures. What I am saying is that cards that fail because of RoHS solder can be re-balled. nVidia chips that fail don't usually fail only because of RoHS solder; there is usually also something wrong with the chip itself.

Radeon 9550 - not working in 3D, no change after reflow
Radeon 9600 XT - no image, reflow not tried yet
Radeon 9800 Pro - artifacts, reflow didn't help

First of all, these cards did not use RoHS solder. Secondly, they most likely used electrolytic capacitors, which must be removed prior to an oven reflow. Not that a reflow would have helped here, because the problem was most likely the capacitors in the first place. These Radeon cards used Nichicon HC series capacitors, which only have a 1000-hour rating at 105°C. In hot systems, these capacitors would fail over time.
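
To put that 1000-hour rating in perspective, here's a quick sketch using the common rule of thumb that electrolytic capacitor life roughly doubles for every 10°C below the rated temperature (illustrative numbers only; real derating also depends on ripple current and so on):

```python
# Rough electrolytic capacitor life estimate using the common
# "life doubles per 10 C below rated temperature" rule of thumb.
# Illustrative only; real derating also depends on ripple current etc.

def estimated_life_hours(rated_hours, rated_temp_c, actual_temp_c):
    return rated_hours * 2 ** ((rated_temp_c - actual_temp_c) / 10)

# A 1000 h @ 105 C part sitting next to a hot GPU, running at 85 C:
print(estimated_life_hours(1000, 105, 85))  # 4000 h: under half a year of 24/7 use
# The same part in a cooler system, at 65 C:
print(estimated_life_hours(1000, 105, 65))  # 16000 h: still under two years of 24/7
```

So a 1000-hour part in a hot case simply isn't going to last, whereas the same part in a well-cooled system can soldier on for years of normal use.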

GeForce FX 5800 Ultra - sometimes artifacting, sometimes working fine
GeForce PCX 5900 - sometimes artifacting, sometimes working fine

Probably defective RAM. These cards are not RoHS AFAIK.

And as for your X19xx cards, I can't for the life of me explain why a reflow helped. I'm pretty sure they are not RoHS.

I should also once again mention that reflowing is a temporary solution, and each subsequent reflow gives increasingly diminished returns as the solder becomes even more brittle, especially considering that most people do not bother to properly inject liquid flux underneath the BGA before reflowing.

Scali wrote:

I'm not defending nVidia at all. As I say, bumpgate is real, I lost two cards to it.
The sensationalist piece by SA however... that's a bit much.

Please point out exactly what is so sensationalist about SA's many exposés on the faulty nVidia hardware of that era.

They don't? How do we know that those pictures are even from an nVidia chip? Or that they're even real? And even if they're real, how do we know the chip was not sabotaged on purpose in order to take that picture?
I've not seen anyone other than SA bring forth all these claims, nor verify them.

SA has posted a memorandum from nVidia itself, admitting at least to the problem with the underfill:
Nvidia changes desktop G86 for no reason

The PCN is dated May 22, 2008 on the bottom of pages 2-5, July 25 on the bottom of page 1, and page 6 is undated. The first big problem is that it is entitled “G86 Desktop Products” with a subtitle “Change Namics 8439-1 Underfill material to Hitachi 3730”. Above that there is “Product/Process Change Notice”, the usual NDA-only disclaimer.

Remember how Nvidia swore up and down that desktop parts were flat out not affected? Remember how we said that all G84 and G86s were because they were the same ASIC? I guess they decided to change this underfill material to better color coordinate with the substrate hues, given the cost of testing, qualification and other work that needs to be done, you certainly wouldn’t want to change it for no good reason. The old one worked just fine, right? Not defective either, they said so. Then again, they said the problem was contained to HP as well.

Keep in mind that even though they did eventually address the issue of the underfill, the underlying problem of the bump placement and the alloy combination of the bumps and pads was still never addressed until Fermi.

One big reason for it is that nobody knew about it until the cards started failing, which was years later. Note that the 8800 is from 2006, and the first talk of 'bumpgate' arose around september 2008.
As I say, the chips tended to work fine for 2-3 years before they break down. Which is why we didn't fully see the problem until 2008.

I beg to differ. Even though nVidia claimed that their defective chipsets were limited to HP (which is not true), HP received many RMAs within the first year of purchase, and they saw fit to extend their typical one-year laptop warranties, not just to save face but because the failures were so widespread.

Incorrect.
There are various types of unleaded solder. Aside from that, there are other factors involved that influence how vulnerable the solder joints are to thermal stress.
By your logic I could also take my 8800GTX as an example instead of your 2900XT. Also older, and also lead-free.

Incorrect.

From Rollback the Lead-Free Initiative:

Myth #7. The solution is SAC solder.
"Without the softening effect of lead, the SAC alloys are more brittle and more likely to crack under pressure. They don't wet well, requiring more active fluxes. They don't have a sharp eutectic, staying plastic over a larger range, allowing intermetallics to form and leading to voids. Their higher melting temperatures stress laminates and components, limiting choices and narrowing process windows. There are unpredictable long-term degradation mechanisms such as: 1) the Kirkendall Effect, in which copper migrates into tin, leaving voids, 2) tin whisker formation, and 3) tin pest, in which the tin turns into powder. "

-ALL- unleaded solders are greatly inferior to leaded solder, regardless of the alloy used. To state that some are universally better than others is erroneous. Some might be better than others for certain applications, depending on what you're trying to achieve, but then other, simpler unleaded alloys might be superior even if they cost less.


Reply 25 of 56, by Scali

Rank: l33t
mockingbird wrote:

Please point out exactly what is so sensationalist about SA's many exposés on the faulty nVidia hardware of that era.

You don't know what sensationalist means?
https://en.wikipedia.org/wiki/Sensationalism
It should be obvious in this and pretty much every other article on SA.
If you don't want to see it, that's your choice. I think most people will agree with me that SA is sensationalist.

mockingbird wrote:

SA has posted memorandum from nVidia themselves admitting at least to the problem of the underfill:
Nvidia changes desktop G86 for no reason

The PCN is dated May 22, 2008 on the bottom of pages 2-5, July 25 on the bottom of Page 1, and Page 6 is undated. The first big problem is that it is entitled “G86 Desktop Products” with a subtitle “Change Namics 8439-1 Underfill material to Hitachi 3730″. Above that there is “Product/Process Change Notice”, the usual NDA only disclaimer.

How is changing the underfill material admitting a problem? nVidia doesn't present the problem as the reason why they changed it, do they?
So it's all speculation on SA's behalf, presented as fact. In other words: sensationalism.

mockingbird wrote:

I beg to differ. Eventhough nVidia claimed that their defective chipsets were only limited to HP (which is not true), HP received many RMAs within the first year of purchase, and they saw fit to extend their typical one year warranties on laptops only to save face, and not just because the failures were so widespread.

If it was just HP, it could still be a problem on their side, such as poorly designed cooling.

mockingbird wrote:

-ALL- unleaded solders are greatly inferior to leaded solder, regardless of the alloy used. To state that some are universally better than others is erroneous. Some might be better than others for certain applications, depending on what you're trying to achieve, but then other, simpler unleaded alloys might be superior even if they cost less.

I didn't say some are universally better than others (nor did I say that any of them were as good as leaded solder; on the contrary). Basically you're saying the same as I am: there are various types of unleaded solder, and some might be better than others for certain applications, e.g. usage in videocards.
Aside from that, different GPUs, PCBs, coolers and cases should not be left out of the equation either. So you can't just compare two videocards and draw conclusions like that.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 26 of 56, by obobskivich

Rank: l33t
candle_86 wrote:

what's the big deal with lead solder anyway, were kids eating video cards and getting sick? I mean really, c'mon

It's not really about "kids eating it" - it's about downstream processing of the equipment at end-of-life. In other words, someone actually thinking about what happens to your GeForce or Pentium or iPad after it's no longer useful, or no longer even works, and has to go "away." There is no true "away" - it has to be sink'd somewhere, be it recycling, landfill, incinerator, whatever. And that sinking process doesn't end within your lifetime, or your children's lifetime, or their children's lifetime, etc. This stuff is here for the long haul in some form or another. The reasoning behind RoHS (and other laws/initiatives/etc.) is to remove heavy metals (like lead) and some other listed substances from the waste stream. The idea being that even IF the equipment were then sent to landfill (which it should not be, but that's probably still optimistic thinking in 2015), you don't have heavy metals leaching into the water table and so forth as the equipment sits for the next million-some years. Like with most policy changes, however, you also end up with an EXTREMELY vocal minority on either side of the debate, even long after policy-makers have made a decision.

Also it's worth remembering - RoHS is not an American initiative. It's from the EU. Other markets were largely dragged under RoHS because manufacturers didn't want to produce Europe-only SKUs and non-European SKUs of the same product. Within the United States, only California has passed laws that actually resemble RoHS.

mockingbird wrote:

That's a good question. I would imagine not, as I've personally seen an old high-end Dell Inspiron with a mobility Geforce 6 that worked well after many years, and I have a fanless Geforce 6200 which still works quite well (after a re-cap, that is).

And see, that's what I'm thinking too, based on the 130nm manufacturing being (reportedly) the same as GeForce FX, and my own observed low failure rate for FX and 6 series cards. Certainly it's something to think about though, as I have noticed that the 6800U tends to run relatively warm even at idle. Then again, that card is something like 11 years old by now, so I figure if it was going to conk out "early on" it would've hurried up and done it already... 🤣 🤣

mockingbird wrote:

First of all, these cards [Radeon 9 series] did not use RoHS solder. Secondly, they most likely used electrolytic capacitors, which must be removed prior to an oven reflow. Not that a reflow would have helped here, because the problem was most likely the capacitors in the first place. These Radeon cards used Nichicon HC series capacitors, which only have a 1000-hour rating at 105°C. In hot systems, these capacitors would fail over time.

I'm not sure if this (capacitor problems) is the entirety of the "R300 death issues" but I will say that over the years I've experienced a higher-than-expected rate of failure for R300 cards compared to other cards from that era (e.g. Radeon 9000, GeForce FX, etc), and heard the same from others too. Yes that's entirely anecdotal, and no I've never heard about anything as well-investigated as Bumpgate for R300, but it seems *something* is going on with them.

Reply 27 of 56, by mockingbird

Rank: Oldbie
Scali wrote:

How is changing the underfill material admitting a problem? nVidia doesn't present the problem as the reason why they changed it, do they?
So it's all speculation on SA's behalf, presented as fact. In other words: sensationalism.

The spec of the underfill they used isn't exactly a secret. It did not meet the requirements of the chip. I don't need a schematic of the die to figure this out. It's as simple as measuring temperatures and comparing them against the datasheets of the underfill.

If it was just HP, it could still be a problem on their side, such as poorly designed cooling.

So then why was nVidia paying HP for each motherboard they had to replace?

I didn't say some are universally better than others (nor did I say that any of them were as good as leaded solder; on the contrary). Basically you're saying the same as I am: there are various types of unleaded solder, and some might be better than others for certain applications, e.g. usage in videocards.
Aside from that, different GPUs, PCBs, coolers and cases should not be left out of the equation either. So you can't just compare two videocards and draw conclusions like that.

Since the underlying problem with the nVidia cards wasn't the unleaded solder, I don't see why this is relevant in the first place. Unleaded solder didn't cause physical harm to the dies of the many defective nVidia cards, nor did it cause the chips to overheat. What you could say is that the increased thermal stresses due to the poor bump placement and bump/pad alloy combination accelerated the detrimental effects of the unleaded solder. That is to say, they caused the unleaded solder to become brittle faster. Considering this, however, had the cards used leaded solder, their mode of failure would have been more permanent, because it would then unquestionably be the die that had failed, due to the bump/pad alloy and incorrect underfill.
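
If you want to put numbers on 'become brittle faster': solder joint fatigue is usually modelled with a Coffin-Manson-style power law, where the number of thermal cycles to failure drops steeply as the strain per cycle rises. A toy sketch (the constants are made up for illustration, not fitted to any real alloy):

```python
# Toy Coffin-Manson-style relation: cycles-to-failure falls off as a power
# of the strain range per thermal cycle. Constants are purely illustrative,
# not fitted to any real solder alloy.

def cycles_to_failure(strain_range, c=0.1, exponent=2.0):
    """N_f = (c / strain_range) ** exponent"""
    return (c / strain_range) ** exponent

for strain in (0.01, 0.02, 0.04):
    print(f"strain {strain:.0%}: ~{cycles_to_failure(strain):,.0f} cycles")
# Doubling the per-cycle strain (hotter chip, worse bump layout) cuts the
# expected number of heat-up/cool-down cycles to failure by ~4x here.
```

That is the sense in which a bad bump layout 'accelerates' an already-marginal solder: it raises the strain each cycle, and the cycle count to failure collapses.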


Reply 28 of 56, by swaaye

Rank: l33t++
obobskivich wrote:

I'm not sure if this (capacitor problems) is the entirety of the "R300 death issues" but I will say that over the years I've experienced a higher-than-expected rate of failure for R300 cards compared to other cards from that era (e.g. Radeon 9000, GeForce FX, etc), and heard the same from others too. Yes that's entirely anecdotal, and no I've never heard about anything as well-investigated as Bumpgate for R300, but it seems *something* is going on with them.

An interesting aspect of the Radeon 9500/9700 is that the heatsink doesn't actually make contact with the die. There's thermal phase-change "wax" filling the gap. I don't know if it matters, but that always seemed a bit strange for a rather hot chip.

The 9700s I've seen start artifacting were "fixed" with a little memory underclocking.

Reply 29 of 56, by Scali

Rank: l33t
mockingbird wrote:

The spec of the underfill they used isn't exactly a secret. It did not meet the requirements of the chip. I don't need a schematic of the die to figure this out. It's as simple as measuring temperatures and comparing them against the datasheets of the underfill.

That's not the point. The point is that nVidia did not literally admit that the underfill is a problem.

mockingbird wrote:

So then why was nVidia paying HP for each motherboard they had to replace?

From what I recall, nVidia didn't pay until much later, when the problems were established, the causes were known, and nVidia was indeed to blame.

mockingbird wrote:

Since the underlying problem with the nVidia cards wasn't the unleaded solder

It wasn't? Where exactly is your proof that there was no problem with unleaded solder?
Also, how do you explain that, if the problems aren't solder-related, reflowing fixes these cards? At least in some cases, the problems are actually the memory chips suffering from bad contacts, which means they have nothing to do with nVidia's choice of underfill.

mockingbird wrote:

Unleaded solder didn't cause physical harm to the dies of the many defective nVidia cards

Again, where is your proof of physical harm to dies? And how do you explain that reflowing fixes dies with physical damage?

Edit: Here's an interesting article; the plot thickens:
http://techreport.com/news/15720/chip-failure … esponds-at-last
Apparently the 'high lead' myth comes from someone at AMD. And as nVidia aptly points out, high-lead is used on many devices, including AMD's own. Ouch!
But at least we now see where SA gets their info, or should I say dirt... from nVidia's biggest competitor.

Last edited by Scali on 2015-07-14, 20:53. Edited 1 time in total.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 30 of 56, by mockingbird

Rank: Oldbie
Scali wrote:

That's not the point. The point is that nVidia did not literally admit that the underfill is a problem.

They admitted it in the memorandum where they announced the change. They admit there that they have to move to an underfill that is better suited for high-temperature applications.

From what I recall, nVidia didn't pay until much later, when the problems were established, the causes were known, and nVidia was indeed to blame.

This is not the case. It didn't take long for the problems with the northbridge chipsets to manifest, and HP was receiving RMAs in droves.

It wasn't? Where exactly is your proof that there was no problem with unleaded solder?
Also, how do you explain that, if the problems aren't solder-related, reflowing fixes these cards? At least in some cases, the problems are actually the memory chips suffering from bad contacts, which means they have nothing to do with nVidia's choice of underfill.

Because people who re-ball motherboards with nVidia chips use NOS nVidia chips. This is evidence in itself: if the problem was only the unleaded solder, these chips would come back to life after a re-ball, which is not the case. More often than not, they need to be completely replaced. Going back to your example of the Xbox 360, this was never the case with the GPU; the GPUs were re-balled and then the system functioned well. And not only was the RAM on the 360 soldered with lead-free solder, so was the CPU, which sat right next to the GPU. And I should mention that the 360 used a very early RoHS alloy.

Again, where is your proof of physical harm to dies? And how do you explain that reflowing fixes dies with physical damage?

There are two types of damage that can occur within the die:

1) The bumps separate from the pads. During a reflow, the bumps reconnect with the pads (apart from the chip reconnecting with the board); it's an unintended but welcome consequence. In this case, the chip can be 'rescued', but it will always be doomed: while you can change the solder between the chip and the board to leaded, you can't change the alloy of the bumps and pads inside the chip, so the chip will always exhibit the same failure, no matter how many times you reflow it, even *after* re-balling it with leaded solder.

2) The die experiences physical damage because the underfill did not allow it to 'float' under high temperatures. In this case, the chip is as dead as a doornail and nothing can bring it back.


Reply 32 of 56, by obobskivich

Rank: l33t
swaaye wrote:

An interesting aspect of the Radeon 9500/9700 is that the heatsink doesn't actually make contact with the die. There's thermal phase-change "wax" filling the gap. I don't know if it matters, but that always seemed a bit strange for a rather hot chip.

The 9700s I've seen start artifacting were "fixed" with a little memory underclocking.

I've admittedly never actually seen a 9500 proper, but the 9700 (non-Pro) I had had a replaced cooler (which performed better than the OEM one, at least), and it still failed eventually (within 2-3 years of being new). My 9600 experienced the same artefacting and eventual total failure as well. WRT the memory, I'm reminded of the discussion we had about 6800/X800 and memory cooling. nVidia cards from this era (FX 5800 Ultra through 6800 Ultra at least) tended to have ridiculous overkill for their memory cooling, while many ATi cards just had bare chips. Maybe that contributed something as well. 😕

Reply 33 of 56, by gerwin

Rank: l33t

The solder issue with the Nvidia 6xxx and 7xxx series did the impossible: it made me an ATI/AMD convert to this day. Two expensive fanless 'silentpipe' cards ran for about a year and died. Next, an expensive laptop turned into a useless paperweight while I was travelling for work. I did drag it all the way back home.
Budget GeForce 6200 cards run fine, but they have neither the punch nor the heat.

Last edited by gerwin on 2015-07-14, 21:00. Edited 1 time in total.

--> ISA Soundcard Overview // Doom MBF 2.04 // SetMul

Reply 34 of 56, by mockingbird

Rank: Oldbie
Scali wrote:

See my edit in the response above... It explains everything I guess.

The problem wasn't only that nVidia used high-lead, but that they mixed high-lead with eutectic.

And no one's saying that AMD doesn't have above-average failure rates too, but only that nVidia's failure rates were above and beyond what anybody should have expected.


Reply 35 of 56, by Scali

Rank: l33t
mockingbird wrote:

The problem wasn't only that nVidia used high-lead, but that they mixed high-lead with eutectic.

Geez, make up your mind already.

mockingbird wrote:

And no one's saying that AMD doesn't have above-average failure rates too, but only that nVidia's failure rates were above and beyond what anybody should have expected.

Nobody denied bumpgate, just the crackpot theories around it, which apparently can be traced back to AMD.
I'm done with this really. I can only guess at why you want to push AMD propaganda about ancient nVidia products in 2015... But I don't care for it at all.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 36 of 56, by mockingbird

Rank: Oldbie
Scali wrote:

Nobody denied bumpgate, just the crackpot theories around it, which apparently can be traced back to AMD.
I'm done with this really. I can only guess at why you want to push AMD propaganda about ancient nVidia products in 2015... But I don't care for it at all.

The thread that this break-off thread originated from was discussing old video cards, and someone mentioned that the G8x/G9x cards were superior to their ATI counterparts. I was only pointing out that the astronomically high failure rates of nVidia cards of that era should be taken into consideration when making the comparison.

mslrlv.png
(Decommissioned:)
7ivtic.png

Reply 37 of 56, by Scali

Rank: l33t
mockingbird wrote:

The thread that this break-off thread originated from was discussing old video cards, and someone mentioned that the G8x/G9x cards were superior to their ATI counterparts.

In terms of performance and features yes.

mockingbird wrote:

I was only pointing out that the astronomically high failure rates of nVidia cards of that era should be taken into consideration when making the comparison.

No, you were doing a LOT more than 'just pointing out'.
Also, it makes absolutely no sense to take failure rates into consideration when talking about performance or features of a given GPU. They are completely unrelated.

The short version would be:
"Yes, but bumpgate!"
-"Yes, we know"
Done.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 38 of 56, by havli

Rank: Oldbie
mockingbird wrote:

I was only pointing out that the astronomically high failure rates of nVidia cards of that era should be taken into consideration when making the comparison.

There is a reason for this: many more GF8s were sold back then, while the Radeon HD 2000 series wasn't very popular. Naturally, because of that we see plenty of dead GF8s; if ten times as many GF8s were sold, you would expect roughly ten times as many dead ones even at the same failure rate...

HW museum.cz - my collection of PC hardware

Reply 39 of 56, by sliderider

Rank: l33t++
Scali wrote:
mockingbird wrote:

Show me an 8800 still in operation today...

I have an 8800GTX that still works.

mockingbird wrote:

The nVidia chips were rushed to the market and they had defects in their engineering.

Incorrect.
The problem was that new RoHS regulations no longer allowed lead-based solder (see https://en.wikipedia.org/wiki/Soldering#Lead- … ronic_soldering). Not all lead-free solder replacements were as reliable. The problem is that the lead made the solder somewhat elastic, which means it could absorb some of the expansion and shrinking that occurs when the chips heat up and cool down. Some of the new solders would crack in these situations. The Xbox 360 RRoD is because of the same issue, and AMD cards also suffered from it, although they generally had smaller GPUs, which didn't suffer as much from changes in temperature. The problem is, you don't really know if it's going to crack until it's been stress-tested for quite a while.
And it doesn't happen on all cards, such as my 8800GTX.
So the problem is not in the chips (and these cards can often be fixed by 'reflowing': heating up the solder so it reseats itself and the cracks are filled).

Anyway, all that is completely beside the point that the GPU itself was nothing short of groundbreaking, as was the R300 a few years earlier. Even today, GPUs from AMD and nVidia still closely resemble a lot of architectural features first introduced in the 8800.

Double incorrect. The problem was that the layers of the GPU separated due to a flaw in the manufacturing process; it had nothing to do with the solder. Apple had solder issues with the white G3 and G4 iBooks and had corrected that problem long before the MacBook Pro issue.