VOGONS



Nvidia adaptergate


First post, by The Serpent Rider

Rank l33t++

As you may already know, all the new, shiny and fancy RTX 4090 video cards are dropping like flies due to an unforeseen problem with the Nvidia-provided 12VHPWR adapter, which is literally melting after prolonged use. The scale of the problem is unknown for now, but Reddit reports new victims every day.

So what's your opinion? Nvidia incompetence, a possible massive problem with the design (which will affect native 12VHPWR cables too), or just users foolishly not seating the connector properly into the 4090?

Last edited by The Serpent Rider on 2022-11-01, 01:10. Edited 3 times in total.

I must be some kind of standard: the anonymous gangbanger of the 21st century.

Reply 1 of 150, by weedeewee

Rank l33t

Bus bars on the next generation of GFX cards!

Right to repair is fundamental. You own it, you're allowed to fix it.
How To Ask Questions The Smart Way
Do not ask Why !
https://www.vogonswiki.com/index.php/Serial_port

Reply 3 of 150, by DosFreak

Rank l33t++

As far as I know there are only 20 known cases. It appears that those involve a cheaply made adapter built with 150V-rated wire rather than the correctly made version with 300V-rated wire. So if you are crazy enough to buy a GPU for that much money, then make sure you check which one you have.

How To Ask Questions The Smart Way
Make your games work offline

Reply 4 of 150, by The Serpent Rider

Rank l33t++
DosFreak wrote on 2022-11-01, 00:27:

It appears that those involve a cheaply made adapter built with 150V-rated wire rather than the correctly made version with 300V-rated wire.

There were confirmed cases with the 300V versions too. Personally, I highly doubt that the female connectors are any different between them. Igor's Lab findings are just additional points of failure, which could lead to issues on a true 600W video card, but so far haven't.

Last edited by The Serpent Rider on 2022-11-01, 01:06. Edited 3 times in total.

I must be some kind of standard: the anonymous gangbanger of the 21st century.

Reply 6 of 150, by The Serpent Rider

Rank l33t++

The "150V" and "300V" adapters are assembled differently, so it's just convenient to refer to them that way.

I must be some kind of standard: the anonymous gangbanger of the 21st century.

Reply 8 of 150, by mihai

Rank Member

Nvidia is hard at work destroying consumer demand in the video card space through a combination of high prices, defective products and terrible SKU naming (the 4080 12 GB comes to mind). I am concerned that AMD will fight very hard to snatch defeat from the jaws of victory and will launch an equally expensive line-up with RDNA 3.

Reply 9 of 150, by mockingbird

Rank Oldbie
mihai wrote on 2022-11-01, 01:38:
Nvidia is hard at work destroying consumer demand in the video card space through a combination of high prices, defective […]

I bought a BNIB 1660 today for $110 USD. I think this is the best-value card for an older system (think AMD Zambezi, e.g. the FX-8100). The trick with nVidia is to let all the fools subsidize the hardware for you, and then snatch it up a couple of years later.

If you need a 4090, you're either very wealthy, or a spendthrift.


Reply 10 of 150, by darry

Rank l33t++
mockingbird wrote on 2022-11-01, 02:06:

If you need a 4090, you're either very wealthy, or a spendthrift.

And/or possibly have a quasi-fetishistic fixation on high-tech space heaters. 😉

More high-tech, more powerful and (hugely) more expensive than this: https://www.amazon.ca/Honeywell-HCE100RCD1-Pe … r/dp/B00LXB1FWW
What's not to like? 😉

Reply 12 of 150, by ZellSF

Rank l33t
mockingbird wrote on 2022-11-01, 02:06:

If you need a 4090, you're either very wealthy, or a spendthrift.

In terms of the money people spend on their hobbies, the RTX 4090 isn't that expensive.

For business use, it's not expensive.

I'm not saying it isn't outrageously priced; I'm just saying you aren't necessarily wealthy if you own one. Like an iPhone.

Reply 13 of 150, by darry

Rank l33t++
Shagittarius wrote on 2022-11-01, 04:59:

I'll let you know if mine goes up in flames.

I am not wishing such an issue on anybody, to be clear. Nor am I judging anybody who actually wants and makes use of the kind of performance this thing provides.

I was mocking the, IMHO, bordering-on-absurd thermal envelope and power draw that this thing has.

That being said, when a connector type:

- is planned/designed for that much current to be going through it
- is meant for use in an enclosed space such as a PC case
- is implemented without dedicated thermal monitoring/cutoff at a potential high-resistance point such as a connector (admittedly more of a card design issue, but still)
- is possibly (TBD if applicable, as it is speculation so far) designed in such a way that current can flow without the connector being properly and fully engaged, thus creating a high-resistance point

then something like what we are hearing about these days was bound to happen (see the back-of-the-envelope numbers below).

Whether this is triggered by a design issue, a manufacturing issue or a user installation issue, this kind of thing should not have happened.
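
To put rough numbers on the above: a minimal Python sketch of the per-pin heating, where the contact resistances and the ~9.5 A per-contact rating are illustrative assumptions, not measurements.

# Back-of-the-envelope heating estimate for the scenario described above.
# Assumed: 600 W board power, 12 V rail, six current-carrying pin pairs,
# and illustrative contact resistances; none of these are measured values.
BOARD_POWER_W = 600.0
RAIL_V = 12.0
POWER_PINS = 6  # 12VHPWR carries the load over six 12 V / GND pairs

total_current_a = BOARD_POWER_W / RAIL_V   # ~50 A in total
per_pin_a = total_current_a / POWER_PINS   # ~8.3 A per pin, uncomfortably close
                                           # to the ~9.5 A often quoted per contact

for label, r_ohm in [("good contact (~5 mOhm)", 0.005),
                     ("degraded contact (~50 mOhm)", 0.050)]:
    heat_w = per_pin_a ** 2 * r_ohm        # I^2 * R dissipated inside one contact
    print(f"{label}: {heat_w:.2f} W in a single pin")

A good contact sheds about 0.35 W; a degraded or half-seated one concentrates about 3.5 W inside one tiny plastic-shrouded contact, which is exactly how a housing melts.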

I haven't checked, but if this is a connector that relies on exerting pressure for it to properly clip into place, then, IMHO, some kind of detection mechanism should have been implemented for safety purposes (i.e. an additional recessed conductor that is only used to electrically detect that the connector is fully engaged; maybe something equivalent or better is already in place, I don't know).
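
For what it's worth on that last point: the 12VHPWR specification does define sideband contacts alongside the twelve power pins, and two of them (SENSE0/SENSE1) advertise the PSU's power capability to the card. A sketch of the coding as widely reported for ATX 3.0 (the pin ordering and wattage table here are assumptions based on public reporting, not a spec excerpt):

# SENSE0/SENSE1 coding for 12VHPWR as widely reported for ATX 3.0.
# "gnd" = pin tied to ground through the cable, "open" = no contact.
SENSE_CODING_W = {
    ("gnd", "gnd"): 600,
    ("gnd", "open"): 450,
    ("open", "gnd"): 300,
    ("open", "open"): 150,
}

def advertised_power_w(sense1: str, sense0: str) -> int:
    # An unplugged sideband reads open/open, so a conforming card
    # should limit itself to 150 W rather than pull 600 W.
    return SENSE_CODING_W[(sense1, sense0)]

print(advertised_power_w("open", "open"))  # 150

Whether the power pins can make partial contact while the sense pins still read as connected is exactly the kind of failure mode being speculated about.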

Reply 14 of 150, by TrashPanda

Rank l33t
darry wrote on 2022-11-01, 10:21:
I am not wishing such an issue on anybody, to be clear. Nor am I judging anybody who actually wants and makes use of the kind […]

The main issue is that under full load for extended periods the connector is simply unable to dump the huge amount of heat generated by the absurd power draw. Not only that, but where nVidia has placed the connector on the card also contributes to a stupid amount of heat build-up. Instead of placing the connector at the rear of the card, mounted to the heatsink where it could get better cooling, they located it on the side of the card near the VRM, where it gets essentially zero cooling as very little airflow passes over that area.

Simply put, they went with looks and beauty over practicality and function, and it bit them fair in the arse when their crazy power draw essentially caused the plastic insulation to melt. (The connector itself is poorly designed for handling the 500+ watts the card can draw, with transient spikes on top.)

Nvidia has done bone-headed shit in the past in regard to its failure to correctly account for heat dissipation. Anyone remember Fermi? How about the overheating 8000-series GPUs?

Reply 15 of 150, by darry

Rank l33t++
TrashPanda wrote on 2022-11-01, 11:07:
The main issue is that under full load for extended periods the connector is simply unable to dump the huge amount of heat […]

Thanks for that perspective.

Potential fire-starting ability aside, this is, IMHO, why I now always steer clear of the highest-end GPU of a given series. They are always made to run at the limits of what card and chip manufacturers think they can get away with:

- huge power draw
- intense cooling needs
- usually relatively hot when running "as designed"
- noisy if air-cooled

Result: often a compromised lifespan due to the above.

Then, even on well-designed and well-made cards, when something goes wrong (the cooling solution degrades due to fan wear/failure, thermal paste degradation, dust, etc.), the effect is compounded.

When the GeForce FX 5800 Ultra was born, it was laughable, but over time the concept has basically become acceptable. IMHO, flagship GPUs these days are doing a great job of keeping the "hot leafblower" spirit alive and well.

Reply 16 of 150, by The Serpent Rider

Rank l33t++
darry wrote:

They are always made to run at the limits of what card and chip manufacturers think they can get away with.

Nvidia runs all their desktop chips at the limit, regardless of segmentation. That's why they've implemented shunt resistors, averaged power-draw measurement, and complete negligence in OCP.
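
For the curious: shunt-based monitoring just means sensing the small voltage drop across a known resistance in the power path. A toy Python example, where the 0.5 mOhm shunt value is illustrative and not taken from any specific card:

# Toy shunt-current calculation; the shunt value is illustrative only.
SHUNT_OHM = 0.0005

def rail_power_w(sense_voltage_v: float, rail_v: float = 12.0) -> float:
    # I = V_sense / R_shunt, then P = I * V_rail
    current_a = sense_voltage_v / SHUNT_OHM
    return current_a * rail_v

print(rail_power_w(0.025))  # 25 mV across the shunt -> 50 A -> 600 W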

I must be some kind of standard: the anonymous gangbanger of the 21st century.

Reply 17 of 150, by TrashPanda

Rank l33t
darry wrote on 2022-11-01, 11:57:
Thanks for that perspective. […]

If you look at what AMD has done with their 7000-series Radeons: they increased power draw by ~50 watts but doubled raster performance, and in RT will be roughly as fast as, or a little faster than, the RTX 3000 series. They realize that power draw is becoming a huge problem for GPUs, so they employed MCM and chiplets to get around it, much like they did with Ryzen. Sure, their RT isn't super amazing, but nVidia does rely on DLSS a bit too much, and the vast majority of the RTX 4000's performance comes from DLSS 3 and the fake frames it adds to the output. (Fake frames are AI-generated frames, and they look fucking terrible.)

Nvidia won't have MCM until their RTX 5000 cards, as they are a bit behind AMD with that tech, but even then I don't see them backing off the power draw; it's how they brute-force their tech to the top.
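
To make the "fake frames" point concrete: interpolated frames raise the displayed frame rate without raising the rate at which the game samples input, so motion looks smoother but responsiveness does not improve. A toy Python sketch (illustrative numbers, not benchmarks):

# Frame generation inserts one generated frame between each pair of rendered
# frames; input is still only sampled once per rendered frame.
def with_frame_generation(rendered_fps: float) -> tuple[float, float]:
    displayed_fps = rendered_fps * 2  # one AI frame per real frame
    input_rate_hz = rendered_fps      # unchanged by interpolation
    return displayed_fps, input_rate_hz

displayed, input_rate = with_frame_generation(60.0)
print(displayed, input_rate)  # 120.0 60.0 -- smoother-looking, not more responsive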

Reply 18 of 150, by TrashPanda

Rank l33t
The Serpent Rider wrote on 2022-11-01, 12:05:

Nvidia runs all their desktop chips at the limit, regardless of segmentation. That's why they've implemented shunt resistors, averaged power-draw measurement, and complete negligence in OCP.

Heh ... you know what's even more amusing: the RTX 4000 doesn't have physical power limits built into the VRM like the RTX 2000 and RTX 3000 have. nVidia had to drop them to get the power draw the cards needed, so now it's all done via the VBIOS.
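
Those VBIOS-enforced limits are at least easy to inspect from the driver side. A minimal sketch using NVML's Python bindings (assumes the pynvml package and an NVIDIA driver are installed; NVML reports values in milliwatts):

# Query the power limits NVML exposes for GPU 0.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

enforced_mw = pynvml.nvmlDeviceGetEnforcedPowerLimit(handle)  # limit currently applied
min_mw, max_mw = pynvml.nvmlDeviceGetPowerManagementLimitConstraints(handle)

print(f"enforced limit:      {enforced_mw / 1000:.0f} W")
print(f"VBIOS-allowed range: {min_mw / 1000:.0f} W - {max_mw / 1000:.0f} W")

pynvml.nvmlShutdown()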

Reply 19 of 150, by The Serpent Rider

Rank l33t++
TrashPanda wrote:

doesn't have physical power limits built into the VRM like the RTX 2000 and RTX 3000 have

Voltage controllers still have OCP, but it has been disabled or set to a ludicrous level since the Fermi refresh days (GTX 5xx), which led to the infamous driver that killed GTX 590s. So nothing new here.

Last edited by The Serpent Rider on 2022-11-01, 12:33. Edited 1 time in total.

I must be some kind of standard: the anonymous gangbanger of the 21st century.