VOGONS


NV bumpgate lead-free solder debacle


Reply 40 of 56, by Scali

Rank: l33t
sliderider wrote:

Double incorrect. The problem was that the layers of the GPU separated due to a flaw in the manufacturing process; it had nothing to do with the solder.

Have any proof to back that up?
This is the first I heard of it.
Also, as pointed out earlier, reflowing won't fix broken dies. Yet there is plenty of proof around (some already posted earlier) that reflowing fixes these cards.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 41 of 56, by Scali

Rank: l33t
havli wrote:
mockingbird wrote:

I was only pointing out that the astronomically high failure rates of nVidia cards of that era should be taken into consideration when making the comparison.

There is a reason for this - many more GF8 cards were sold back then. The Radeon HD 2000 series wasn't very popular. Naturally, because of that, we have plenty of dead GF8 cards...

I'm not sure why we should care about failure rates anyway... We collect old hardware here, we're used to things breaking, right? Pretty much all the hardware we use is well past its warranty period, and often well beyond its 'best before' date as well.
There are many reasons why old hardware breaks. Bumpgate is just one of them. I think we all know the risks here of buying/using old hardware.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 42 of 56, by gerwin

Rank: l33t

I am still in denial, telling myself I don't collect anything. Just trying things out.
For me reliability is king, so as far as I'm concerned the GeForce 66xx / 68xx / 7xxx / 8xxx don't exist anymore. For others performance may be king, or something else, and that's fine. 😀

--> ISA Soundcard Overview // Doom MBF 2.04 // SetMul

Reply 43 of 56, by mockingbird

Rank: Oldbie
Scali wrote:
sliderider wrote:

Double incorrect. The problem was that the layers of the GPU separated due to a flaw in the manufacturing process; it had nothing to do with the solder.

Have any proof to back that up?
This is the first I heard of it.
Also, as pointed out earlier, reflowing won't fix broken dies. Yet there is plenty of proof around (some already posted earlier) that reflowing fixes these cards.

Straight from the horse's mouth:

https://web.archive.org/web/20111231003609/ht … e.com/kb/TS2377

In July 2008, NVIDIA publicly acknowledged a higher than normal failure rate for some of their graphics processors due to a packaging defect. At that same time, NVIDIA assured Apple that Mac computers with these graphics processors were not affected. However, after an Apple-led investigation, Apple has determined that some MacBook Pro computers with the NVIDIA GeForce 8600M GT graphics processor may be affected. If the NVIDIA graphics processor in your MacBook Pro has failed, or fails within four years of the original date of purchase, a repair will be done free of charge, even if your MacBook Pro is out of warranty.


Reply 44 of 56, by PhilsComputerLab

Rank: l33t++

I remember reading an article about the Xbox 360 and the issues it had. The person in charge said that it was mainly because of the lack of social networking back in those days that they got away fairly clean. Sure, it cost them a fortune, but if something like this happened these days, it would be game over. The Nvidia situation feels the same. People in the know, or who worked in the industry, all know the truth. But it never became a huge scandal because of the lack of social networking back in the day.

YouTube, Facebook, Website

Reply 45 of 56, by ODwilly

Rank: l33t

Just figured I would add in my experience with the topic. I have had two 8800 GT cards run fine for their entire lives in a mATX HP case. They are now both running happily along in PCs as we speak. I have a Gateway 8800 GTS that seems to run really well too! They ran really hot until I replaced the thermal paste and pads a year back, but they now run around 80°C rather than 110°C 😀 My bad experiences have been with the mobile variants. Around 6 people I know bought laptops that died due to the Nvidia design/solder issues.

Main pc: Asus ROG 17. R9 5900HX, RTX 3070m, 16gb ddr4 3200, 1tb NVME.
Retro PC: Soyo P4S Dragon, 3gb ddr 266, 120gb Maxtor, Geforce Fx 5950 Ultra, SB Live! 5.1

Reply 46 of 56, by PhilsComputerLab

Rank: l33t++

What coolers are you using on those cards?

I'm a fan of these cheap eBay coolers, but I believe an 8800 GT will be too much for them.

YouTube, Facebook, Website

Reply 47 of 56, by ODwilly

Rank: l33t
philscomputerlab wrote:

What coolers are you using on those cards?

I'm a fan of these cheap eBay coolers, but I believe an 8800 GT will be too much for them.

The 8800 GT's are running the stock single-slot EVGA cooler, while the 8800 GTS is running the stock dual-slot EVGA cooler. TBH I was really surprised what some Arctic Ceramique 2 and a good cleaning did to help cool them down; they went from burning the skin off your finger to just being slightly hot to the touch. Picked all of them up for less than $30 total. The only one I have left in my possession is the GTS; the two GT's found their way into some high-end dual-core machines for a couple of friends. 😀 Now they can actually run Fallout 3 and some other Vista/late-XP games fairly well.

Main pc: Asus ROG 17. R9 5900HX, RTX 3070m, 16gb ddr4 3200, 1tb NVME.
Retro PC: Soyo P4S Dragon, 3gb ddr 266, 120gb Maxtor, Geforce Fx 5950 Ultra, SB Live! 5.1

Reply 49 of 56, by GeorgeMan

Rank: Oldbie
philscomputerlab wrote:

I've got quite a few 8800-type cards: 8800 GT, 9600 GT, 9800 GTS, GTX+. They all work fine.

I do remember an issue with notebook chips, but I believe it was the 7 series that was affected. But not 100% sure.

Oh, those 8600M GT... Very high failure rate. I actually don't remember a single friend with this particular mobile GPU not having problems with it. Everyone eventually reflowed/replaced it or trashed the laptop 😜

Acer Helios Neo 16 | i7-13700HX | 64G DDR5 | RTX 4070M | 32" AOC 75Hz 2K IPS + 17" DEC CRT 1024x768 @ 85Hz
Win11 + Virtualization => Emudeck @consoles | pcem @DOS~Win95 | Virtualbox @Win98SE & softGPU | VMware @2K&XP | ΕΧΟDΟS

Reply 50 of 56, by nforce4max

Rank: l33t
mockingbird wrote:
Scali wrote:

You realize that the claim that nVidia doesn't know at what voltages they can run the chips that they themselves designed is rather far-fetched, right?

When your chips are running at 100C and failing left and right, I don't think it's a question of them deciding to run them at those voltages to "keep healthy margins". There was no margin at all at those voltages. Like I said, considering the poor engineering of the chips, they were being factory over-volted, and for no apparent reason.

Hey, the truth is sometimes stranger than fiction. I didn't make this up.

Again, this doesn't make sense.
You dismiss the technical merits of the architecture based on the fact that the reliability wasn't that great.

I think my point was very salient. Bitboys cards also had a lot of technical merit. In simulations they outperformed everything else, and I'm sure that had they had some millionaire backers, not to mention some luck, they might have put out some pretty impressive silicon.

And we're not talking about nVidia cards of that era failing after 3 years. We're talking about cards dropping like flies after several months of usage. Just look at consumer-submitted Newegg follow-up reviews of GeForce cards from that era to get a pretty good idea of just how long these cards lasted on average.

And again, this wasn't limited to one series of cards. This took place over a span of many years, perhaps even up until the very last G9x silicon. And keep in mind that G9x silicon was still being sold even after Fermi was released. So while high-end GeForce 2xx cards were Fermi-based, lower-end 2xx models were simply re-badged G9x models, and were still being sold well into 2010.

Why can't people get their facts right about what generation cards are based on? GeForce 2xx isn't Fermi; that was the 4xx and 5xx era. G200-era cards were basically G92-based, improved slightly but not re-badged, except for the GTS 250, which was an inferior copy of the 9800 GTX+.

On a far away planet reading your posts in the year 10,191.

Reply 51 of 56, by meljor

Rank: Oldbie

Not inferior, exactly the same card (with optional 1GB vs 512MB).

asus tx97-e, 233mmx, voodoo1, s3 virge ,sb16
asus p5a, k6-3+ @ 550mhz, voodoo2 12mb sli, gf2 gts, awe32
asus p3b-f, p3-700, voodoo3 3500TV agp, awe64
asus tusl2-c, p3-S 1,4ghz, voodoo5 5500, live!
asus a7n8x DL, barton cpu, 6800ultra, Voodoo3 pci, audigy1

Reply 52 of 56, by sliderider

Rank: l33t++
Scali wrote:
sliderider wrote:

Double incorrect. The problem was that the layers of the GPU separated due to a flaw in the manufacturing process; it had nothing to do with the solder.

Have any proof to back that up?
This is the first I heard of it.
Also, as pointed out earlier, reflowing won't fix broken dies. Yet there is plenty of proof around (some already posted earlier) that reflowing fixes these cards.

http://www.theinquirer.net/inquirer/news/1028 … ia-g84-g86s-bad

"Both of these ASICs have a rather terminal problem with unnamed substrate or bumping material, and it is heat related."

"The official story is that it was a batch of end-of-life parts that used a different bonding/substrate process for only that batch."

"More than enough people tell us both the G84 and G86 use the same ASIC across the board, and no changes were made during their lives."

"When the process engineers pinged by the INQ picked themselves off the floor from laughing, they politely said that there is about zero chance that NV would change the assembly process or material set for a batch, much less an EOL part."

So it is an across-the-board problem with the materials used and not a solder-related issue, as I said previously. Apple resolved its issues with lead-free solder when the white G3 and G4 iBooks suffered massive failures from the brittle, lead-free solder cracking due to vibration and flexing of the motherboard.

Reply 54 of 56, by Logistics

Rank: Oldbie
mockingbird wrote:

I don't always make argumentative posts, but when I do, I begin using high-vocabulary!

I think I'm going to go make a condescending, obnoxious post about the engineering faults of Ford, and how they caused the exploding gas tank in the Pinto, which is of course why there are no running examples left in existence--they have all blown up. But I'll make sure to begin the post with little to no references so that everyone will imagine I am plagiarizing someone else from copy & pasted Google search results. THAT OUGHTTA GET THE JIMMIES RUSTLED!

Reply 55 of 56, by mockingbird

Rank: Oldbie

That's ok, you don't have to walk on eggshells with me, in fact I appreciate it! 😀


Reply 56 of 56, by Logistics

Rank: Oldbie

High-five, then! I like it when people have thick skin.