VOGONS

Reply 40 of 49, by SirNickity

Rank Oldbie

48V requires a DC-DC converter that compromises on range vs. efficiency. Nothing is free. Regulating 48V down to 1.0V or less for high-speed digital components (RAM, CPU, etc.) is an inefficient process compared to 5V->1V or even 12V->1V.
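
Rough numbers to illustrate the step-down ratio problem -- a minimal sketch, assuming an idealized single-stage buck converter (real 48V designs use an intermediate bus and multiphase VRMs rather than one giant step):

```python
# Back-of-the-envelope duty cycles for an ideal single-stage buck converter
# stepping various input rails down to a 1.0 V core rail: D = Vout / Vin.
# (Illustrative only -- real VRMs are multiphase, and 48 V designs typically
# use an intermediate bus instead of one giant step.)

V_OUT = 1.0  # volts, core rail

for v_in in (5.0, 12.0, 48.0):
    duty = V_OUT / v_in
    print(f"{v_in:5.1f} V in -> {V_OUT} V out: ideal duty cycle {duty * 100:4.1f} %")

# Output:
#   5.0 V in -> 1.0 V out: ideal duty cycle 20.0 %
#  12.0 V in -> 1.0 V out: ideal duty cycle  8.3 %
#  48.0 V in -> 1.0 V out: ideal duty cycle  2.1 %
# The narrower the on-time, the higher the peak currents and switching
# stresses, which is why a single-stage 48 V -> 1 V buck is a hard sell.
```

That ~2% duty-cycle case is the reason 48V server designs split the job into two stages (48V down to an intermediate bus, then the bus down to the core rail) rather than doing it in one hop.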

The reason it might make sense for server applications is high current demand. You have to move 12V at high amperage around somehow, and that requires thicker wire and thick pours of copper. The losses from compromising on conductor area can outweigh the conversion losses. That's not the case for single-CPU systems that run idle most of the time.

It makes sense for PoE applications due to line length. Higher voltage, lower current -- again, less loss over lo-o-o-o-o-ong runs of cable. OK for 15W, even for 30W. Now we're starting to talk about 90W per port (!), which is just ridiculous. Before long, people will be stringing their old incandescent lighting systems off of an Ethernet switch. I digress...
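
And the rough I²R arithmetic behind both of those points -- the resistance figures below are hypothetical ballpark values, purely for illustration:

```python
# Same power delivered over the same copper at different distribution
# voltages.  Resistance values are hypothetical ballpark figures.

def line_loss(power_w, volts, resistance_ohm):
    """Conduction loss (I^2 * R) in the distribution path for a given load."""
    current = power_w / volts
    return current ** 2 * resistance_ohm

# A 240 W load fed through ~10 milliohms of board / cable copper:
for v in (12.0, 48.0):
    print(f"240 W at {v:4.1f} V: {line_loss(240, v, 0.010):5.2f} W lost in the copper")

# A 90 W PoE load over ~100 m of Cat5e (assume ~6.25 ohms effective loop
# resistance with all four pairs carrying current):
for v in (12.0, 48.0):
    print(f" 90 W at {v:4.1f} V over 100 m: {line_loss(90, v, 6.25):6.1f} W lost in the cable")

# 240 W at 12 V loses ~4 W in 10 milliohms; at 48 V it's ~0.25 W.
# 90 W PoE at 48 V loses ~22 W over the run; at 12 V the "loss" would exceed
# the load entirely -- you simply couldn't deliver 90 W that way.
```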

12V to a motherboard is fine. Asking peripheral manufacturers to start transitioning to single-rail power for drives would be fine. (Heck, drives are mostly an obsolete phenomenon anyway, except for increasingly niche applications like NAS, servers, etc.) Doing bulk 12V-to-5V conversion on a motherboard is just a dumb proposition*, except for tightly integrated systems where the load is minimal and known in advance. For one, it just moves the problem from a device that is capable of doing the conversion in the most opportune way to a device that is already crammed full of stuff and has the least flexibility in form factor to handle high-power conversion. I really hope that's not where this standard is headed, because it would just be asinine.

(* I want to be clear: By this, I mean it would be a dumb proposition to transition the standard in this way. I do not mean to imply commentary here was dumb for mentioning it as a potential interpretation of the news.)

I'm not really sure whether to expect Intel to know better at this point, though. I really think they view the NUC as the next unit of computing, and that "everything should look like this now." IMO, the industry giants (Intel, MS, and Apple) are infected with a tragic case of tunnel vision. It's maybe forgivable when your market is a smaller demographic of computing, but Intel and Microsoft especially have a greater obligation on account of their market share, so their actions reflect on a much larger cross-section of users.

Last edited by SirNickity on 2020-01-27, 22:25. Edited 1 time in total.

Reply 41 of 49, by Stiletto

Rank l33t++
brownk wrote on 2020-01-26, 04:39:

Btw, it seems my post is a dupe.

Mod, should you need to choose, plz remove mine.

Merged topics and moved to Milliways.

"I see a little silhouette-o of a man, Scaramouche, Scaramouche, will you
do the Fandango!" - Queen

Stiletto

Reply 42 of 49, by gdjacobs

Rank l33t++
SirNickity wrote on 2020-01-27, 18:56:

48V requires a DC-DC converter that compromises on range vs. efficiency. Nothing is free. Regulating 48V down to 1.0V or less for high-speed digital components (RAM, CPU, etc.) is an inefficient process compared to 5V->1V or even 12V->1V.

48V demands faster switching transistors, whereas a 12V input benefits more from lower conduction losses. I'm not a power electronics engineer, so I don't know what the MOSFET price/performance curve looks like. Coils are also a big source of losses under switching, and reducing hysteresis and copper losses requires some differences in the design of the magnetics.
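
A first-order sketch of where those two loss terms land for one phase of a synchronous buck -- the FET and frequency numbers below are hypothetical placeholders, not taken from any real VRM design:

```python
# Very rough conduction + switching loss estimate for the high-side FET of
# one buck phase.  All component values are hypothetical placeholders.

def buck_losses(v_in, v_out, i_out, r_dson, f_sw, t_sw):
    duty = v_out / v_in                          # ideal duty cycle
    p_cond = i_out ** 2 * r_dson * duty          # conduction loss while the FET is on
    p_sw = 0.5 * v_in * i_out * t_sw * f_sw * 2  # one turn-on + one turn-off per cycle
    return p_cond, p_sw

# 100 A at 1.0 V out, 500 kHz switching, 20 ns transitions, 2 mOhm FET:
for v_in in (12.0, 48.0):
    cond, sw = buck_losses(v_in, 1.0, 100.0, 0.002, 500e3, 20e-9)
    print(f"{v_in:4.1f} V in: conduction ~{cond:4.1f} W, switching ~{sw:4.1f} W")

# 12.0 V in: conduction ~ 1.7 W, switching ~12.0 W
# 48.0 V in: conduction ~ 0.4 W, switching ~48.0 W
# Switching loss scales with input voltage, so a 48 V stage needs faster (or
# soft-switched) transistors to claw that back, while a 12 V stage already
# gets most of its benefit from low conduction losses.
```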

The unfortunate insanity is that we keep bodging auxiliary power connectors onto our platforms to shoehorn more wattage in. Stepping the distribution voltage up a modest amount would address the issue without extra copper, with only a small penalty in efficiency and small changes in board layout. As I mentioned, conversion for compatibility could be done with mezzanine modules that jack into the ATX, CPU aux, PCIe aux, and peripheral connectors. Maybe there could be a pluggable standard so they jack right into the back of the PSU?

All hail the Great Capacitor Brand Finder

Reply 43 of 49, by luckybob

Rank l33t

Yeah, but 48/56V is such a standard voltage with enterprise battery backups that I'd be genuinely surprised to find out there aren't ready-made, purpose-built units already in existence.

It is a mistake to think you can solve any major problems just with potatoes.

Reply 44 of 49, by Horun

Rank l33t++

OK, you guys are right. Had a brain cramp and was focusing on the PSU as a whole and not the ATX spec and motherboard power.

Hate posting a reply and then having to edit it because it made no sense 😁

First computer was an IBM 3270 workstation with CGA monitor. https://archive.org/details/@horun

Reply 45 of 49, by rmay635703

Rank Oldbie

The question is why call this new thing ATX?

A unified-voltage PSU may as well have a bullet connector or, worse, an external supply.

I’m imagining a humming $5 space heater inside the case.

What’s even better is that LED computer lights will either grow in cost or become incandescent.

Seems like a big step backwards, but at least I can now use the 600-watt supply to jump-start my car.

Reply 46 of 49, by Jo22

Rank l33t++

That's a good question, I think.
As far as I can remember, Intel made something different last time and called it BTX.
It was not only related to PSUs, but also affected the whole physical design of PC motherboards and chassis.
Unfortunately, it was not accepted by the PC industry, which saddens me a tiny bit.
With BTX, the PCI and AGP cards were finally rotated, so the components would face upwards,
just the way they used to in the ISA card era.

"Time, it seems, doesn't flow. For some it's fast, for some it's slow.
In what to one race is no time at all, another race can rise and fall..." - The Minstrel

//My video channel//

Reply 47 of 49, by SirNickity

Rank Oldbie

I think that's mainly because people are still trying to "set standards" like it's 1980 and they're IBM. There's a whole lot of inertia right now, and sweeping changes aren't going to work unless there's a really dramatic improvement. If you pull the rug out from under the industry, the industry will just ignore you and somebody else will step in to take the lead, with the promise not to upset the apple cart.

BTX had some really good ideas, but Intel was still in the mindset of the days when we transitioned from AT to ATX. That was a long time ago, and the industry was a lot smaller then. We hadn't even totally gotten accustomed to things being compatible beyond one generation yet. That coup wouldn't have worked today either. Somebody would have added a control connector to the AT PSU that provided soft-off capability and called it a day.

With the 12V PSU thing, it wouldn't be impossible to shift to a single-rail PSU. And that could mean that smaller, low-power PCs could become powered by external DC input, without having to be modified in a proprietary way -- which is great. But several things have to happen first.

For one, we've trained all the component vendors to use lower-voltage rails: think of SATA and its somewhat ineffective push to 3.3V, or how just about any 3.5" floppy or ZIP drive ran off 5V and ignored 12V altogether. That trend would have to be reversed to favor the 12V rail and regulate downward -- which is going to affect (to some degree) the form factor of 2.5" and smaller drives. (But again, as mechanical drives go the way of the dodo anyway, this is less of an issue.)

Next, the spec is going to have to be very lenient on the 12V rail's actual voltage. Otherwise, you still need a PSU that can guarantee well-regulated, relatively ripple-free, noise-free power.

A few years down the line, when "stuff that still uses 5V" is considered legacy, that transition can occur. There would be a cottage industry of compatibility products, like 12V-in, 12V + 5V out adapters a la Molex-to-SATA (but with active components), etc.

And there's no reason someone couldn't market 48V PSUs that convert to 12V internally for in-system distribution. It could be done by switching through a transformer or by direct DC-to-DC conversion, whichever topology makes the most sense for cost / efficiency / load / size constraints. Heck, that could happen now. I think the only reason it doesn't happen (outside of industrial environments with DC power) is that bulk DC power isn't terribly common, there are no typical connectors for consumer use, and so on. Plus AC/DC conversion has gotten efficient enough that only die-hard telecom shops still want to deal with battery plants. Maybe very large scale data centers as well, where single-digit percentage efficiency gains are worth the overhead of custom engineering.

Reply 48 of 49, by luckybob

Rank l33t

I actually have a Pentium Pro server with removable PSU blocks.

One set of blocks runs on 120V AC; the second set runs on 48V DC.

It's quite cool.

It is a mistake to think you can solve any major problems just with potatoes.

Reply 49 of 49, by The Serpent Rider

Rank l33t++

Jo22 wrote:

Unfortunately, it was not accepted by the PC industry, which saddens me a tiny bit.

You shouldn't be. It was rushed and horrible.

I must be some kind of standard: the anonymous gangbanger of the 21st century.