VOGONS


First post, by UCyborg

User metadata
Rank: Oldbie

Came across these recently, thought I'd share:

https://www.gamepressure.com/newsroom/legenda … worlds-t/z07ebb
https://www.techspot.com/news/107918-john-car … ould-stave.html

Arthur Schopenhauer wrote:

A man can be himself only so long as he is alone; and if he does not love solitude, he will not love freedom; for it is only when he is alone that he is really free.

Reply 1 of 47, by gerry

User metadata
Rank: l33t

it's an interesting thought experiment: one writer posits "what if we stopped making CPUs", to which Carmack suggests we'd have an incentive to optimise software

It is possible to imagine some natural and/or political event that interrupts the CPU industry enough to create a long-term halt in production without also destroying "the rest" of society.

You can imagine a global economic and political reversion to constrained trade and vastly reduced manufacturing, plus some events that make it impractical to re-start (or re-build) existing CPU infrastructure.

Even a significant slowdown could have a milder version of this effect, e.g. serious chip shortages.

Reply 2 of 47, by BinaryDemon

User metadata
Rank: Oldbie

I don’t like that computers are basically disposable now, and I’m super impressed when I see optimized software. But realistically, if it takes longer to optimize the software than to release newer, faster hardware, developers are always going to choose the easiest path.

Maybe AI could help with the optimization part before the platform is obsolete.

Reply 3 of 47, by RandomStranger

User metadata
Rank: Oldbie
gerry wrote on 2025-05-27, 11:44:

it's an interesting thought experiment: one writer posits "what if we stopped making CPUs", to which Carmack suggests we'd have an incentive to optimise software

We lived through that from the introduction of the Core 2 to the introduction of the first-gen Ryzen. There was progress, but so slow that one could get through 6-8 years without upgrading the CPU, just overclocking it. And game graphics still improved. We went from Race Driver: GRID to Dirt Rally, and from Mass Effect to The Witcher 3, over those years.

What's different now?
Yeah, sloppy optimization is one problem, but modern eye-candy features are also absolute performance hogs that add hardly anything of value gameplay-wise and don't fundamentally improve the visuals. Real-time ray tracing, for example, can wipe out 25-40% of your frame rate, and most of the time the difference in visuals is only marginal.


Reply 4 of 47, by wierd_w

User metadata
Rank: Oldbie

Depends. Real-time raytracing lets you do realistic water refraction and other refraction effects (think glass or crystal pillars, textured glass surfaces, etc.).

Do those add gameplay value?

Water refraction might, because you'd have to correct for it when shooting something in the water, etc.

The 'refraction of clear solids' might also get gameplay value in realistic laser puzzles, but the game would have to explicitly make use of it.

Mostly, the optimization thing comes from MS's demands to use multiple layers of API abstraction, leading to a lot of bloat between the application and the driver/hardware.

Not much to be done about that from an app developer standpoint.

Reply 5 of 47, by swaaye

User metadata
Rank: l33t++

JC was making hardware obsolete with the best of them back in the day, even with supremely optimized code.

Actually, if games aren't much of a concern, hardware lasts incredibly long now. I have friends still rocking their Core 2 machines. Windows 11 wants you to upgrade of course, but Linux works quite well.

Last edited by swaaye on 2025-05-27, 17:20. Edited 1 time in total.

Reply 6 of 47, by Falcosoft

User metadata
Rank: l33t
swaaye wrote on 2025-05-27, 17:06:

JC was making hardware obsolete with the best of them back in the day, even with supremely optimized code.

Yeah, low-level (over?)optimized code is usually microarchitecture-specific. Like in the case of Quake 1/2, which were optimized specifically for the Pentium's pipelined FPU, making contemporary Cyrix CPUs obsolete...

Website, Youtube
Falcosoft Soundfont Midi Player + Munt VSTi + BassMidi VSTi
VST Midi Driver Midi Mapper
x86 microarchitecture benchmark (MandelX)

Reply 7 of 47, by Shponglefan

User metadata
Rank: l33t

Leaving aside games, a lot of modern applications (including OSes) are full of bloat, in part due to continued building on top of older code bases. This is something that Casey Muratori (Molly Rocket) has ranted about on numerous occasions. Despite exponential increases in CPU performance, we haven't seen equivalent increases in application performance. In some cases, performance declines as software becomes more bloated.

Pentium 4 Multi-OS Build
486 DX4-100 with 6 sound cards
486 DX-33 with 5 sound cards

Reply 8 of 47, by vvbee

User metadata
Rank: Oldbie

The articles aren't very accurate to what Carmack was saying. They want to limit compute, while Carmack's point is that doing so would limit forward innovation. Even ignoring that, it's not a very interesting angle that software could be more optimized; everybody knows it, and everybody's complained about it at some point. The specifics could be more interesting.

Reply 9 of 47, by keenmaster486

User metadata
Rank: l33t

Well yeah. It's just a matter of incentives. Devs don't have to optimize, so they don't. There are no consequences for throwing it together in Electron, so that's what they do because it takes much less time and effort. If we were suddenly forced to make everything work on a Core 2 Duo, we would do our best and end up with something much faster than we have now. Seems kind of obvious to me.

You won't get miracles like Cyberpunk running on a 486, but there's definitely a lot of room for improvement.

World's foremost 486 enjoyer.

Reply 10 of 47, by davidrg

User metadata
Rank: Member
Shponglefan wrote on 2025-05-27, 17:58:

Leaving aside games, a lot of modern applications (including OSes) are full of bloat, in part due to continued building on top of older code bases.

Building on top of older code bases isn't the problem - those older codebases are likely better optimised and more memory-efficient than all the newer layers of code, if for no other reason than that there were fewer compute resources to waste 20 years ago.

The real source of bloat is....

keenmaster486 wrote on 2025-05-27, 19:00:

Well yeah. It's just a matter of incentives. Devs don't have to optimize, so they don't. There are no consequences for throwing it together in Electron, so that's what they do because it takes much less time and effort.

All high-level languages trade some level of efficiency (whether CPU or memory) for improved developer efficiency. Time is money, after all, so if you can implement a feature in half the time using a higher-level language and the increased system requirements aren't a problem, then the choice is obvious. But even among high-level languages and frameworks there are different levels. I remember when Java and .NET were considered slow and bloated, but they're a model of efficiency compared to something like Electron. IMO, spinning up an entire web browser just to edit a text file is going a bit far when it comes to wasting compute resources with wild abandon, not to mention the dependency nightmare that development platform inflicts on desktop apps, which will probably in the long run kill developer efficiency just as badly as it kills resource efficiency.

Reply 11 of 47, by UCyborg

User metadata
Rank: Oldbie

One example: compare modern YouTube with Project VORAPIS. Now, what does the modern layout really bring to the table besides reducing information density and being slow?

Arthur Schopenhauer wrote:

A man can be himself only so long as he is alone; and if he does not love solitude, he will not love freedom; for it is only when he is alone that he is really free.

Reply 12 of 47, by luckybob

User metadata
Rank: l33t++

(reaction image)

YOU'RE NOT MY SUPERVISOR! I can use reaction images if I want to!

It is a mistake to think you can solve any major problems just with potatoes.

Reply 13 of 47, by Shponglefan

User metadata
Rank: l33t
davidrg wrote on 2025-05-27, 21:27:

Building on top of older code bases isn't the problem - those older codebases are likely better optimised and more memory-efficient than all the newer layers of code, if for no other reason than that there were fewer compute resources to waste 20 years ago.

It can depend on how they were built. An older application developed for single-threaded performance, or primarily for the CPU (e.g. lacking GPU acceleration), can see its performance suffer down the road. I have used a number of art applications over the decades that have suffered from this, with newer versions becoming progressively less performant while trying to pack on new features.

Pentium 4 Multi-OS Build
486 DX4-100 with 6 sound cards
486 DX-33 with 5 sound cards

Reply 14 of 47, by myne

User metadata
Rank: Oldbie
Falcosoft wrote on 2025-05-27, 17:19:
swaaye wrote on 2025-05-27, 17:06:

JC was making hardware obsolete with the best of them back in the day, even with supremely optimized code.

Yeah, low-level (over?)optimized code is usually microarchitecture-specific. Like in the case of Quake 1/2, which were optimized specifically for the Pentium's pipelined FPU, making contemporary Cyrix CPUs obsolete...

Back in the day, Carmack and even Microsoft would have multiple code paths depending on feature support.

E.g. Windows XP required an instruction from a Pentium, but 2000 and NT would use it if it existed, or take the longer path if it didn't. CMPXCHG8B, or something like that. Someone patched XP to support 486s not that long ago.

One thing that might be interesting is how AI affects compiler optimisation of code.
Compiling might become a bit of a circular process instead of one-and-done.
I.e., you hit compile, and there could be a dozen ways to get working code; you basically just hope the compiler gets it right, but it lacks real understanding of which code is the most used.
The "main stream", as it were.
Presumably, someone will build an AI that profiles the code during runtime and loops back to test recompiled variants before settling on a fixed binary. It should also make alternate branches like AVX2 vs no AVX2 more automatic and optimal.
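
A non-AI version of that loop already exists, for what it's worth: profile-guided optimization (GCC's -fprofile-generate / -fprofile-use), where you run the program, collect a profile, and recompile with the hot paths known. The AVX2-vs-no-AVX2 branching can also be automated at runtime today. A minimal sketch of such a dispatcher, assuming GCC or Clang on x86; the sum_* functions are hypothetical examples, not from any real codebase:

#include <stdio.h>

/* Portable fallback path: plain scalar loop, no SIMD assumed. */
static float sum_scalar(const float *a, int n)
{
    float s = 0.0f;
    for (int i = 0; i < n; i++)
        s += a[i];
    return s;
}

/* Same loop compiled with AVX2 enabled; the compiler may auto-vectorize it. */
__attribute__((target("avx2")))
static float sum_avx2(const float *a, int n)
{
    float s = 0.0f;
    for (int i = 0; i < n; i++)
        s += a[i];
    return s;
}

int main(void)
{
    float data[8] = {1, 2, 3, 4, 5, 6, 7, 8};

    __builtin_cpu_init();  /* populate the CPU feature info the builtin reads */
    float (*sum)(const float *, int) =
        __builtin_cpu_supports("avx2") ? sum_avx2 : sum_scalar;

    printf("sum = %f\n", sum(data, 8));
    return 0;
}

GCC can even generate this dispatch automatically via __attribute__((target_clones("avx2","default"))), which is about as close to "automatic and optimal" as current compilers get.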

It matters because, even though it doesn't really get much attention, processors spend a lot of, if not most of, their time on logistics, not math.
Load this memory, save this memory, send this memory.

The biggest wins in optimisation tend to be found in the logistics, rather than in selecting the perfect algorithm (though the two tend to go hand in hand).

It's a math factory with production lines. It is more efficient to load one piece of paper and do multiple calculations on it than to load a new page for every calculation.
If you have a good idea of which code is most used, and therefore which memory is most important, you can tweak the order of the code like a flowchart, where the chart is arranged around the main path with the lesser-used branches off to the side.
Load the main path into cache and it is super fast. Have the exact same process organised differently and it keeps having to load new parts, slowing the whole factory down.
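
The "one piece of paper" analogy can be made concrete with a small illustrative sketch (a hypothetical example, not from the thread): both functions below compute the same sum, but one walks memory in storage order while the other jumps a full row ahead on every access. On most machines the first is several times faster, purely from cache-line reuse.

#include <stdio.h>

#define N 1024
static double grid[N][N];

/* Walks memory in storage order: consecutive addresses, so each loaded
   cache line is fully used before the next one is fetched. */
static double sum_row_major(void)
{
    double s = 0.0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            s += grid[i][j];
    return s;
}

/* Same arithmetic, hostile order: each access strides N * 8 bytes, so
   nearly every load pulls in a new cache line. */
static double sum_col_major(void)
{
    double s = 0.0;
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            s += grid[i][j];
    return s;
}

int main(void)
{
    printf("%f %f\n", sum_row_major(), sum_col_major());
    return 0;
}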

I built:
Convert old ASUS ASC boardviews to KICAD PCB!
Re: A comprehensive guide to install and play MechWarrior 2 on new versions on Windows.
Dos+Windows 3.11+tcp+vbe_svga auto-install iso template
Script to backup Win9x\ME drivers from a working install
Re: The thing no one asked for: KICAD 440bx reference schematic

Reply 15 of 47, by Intel486dx33

User metadata
Rank: l33t++

What do you think the 1990s were running on?
It was running on 1990s computers:
486s and Pentiums and RISC CPUs.

It was the Pentium CPU that dominated the computer market.

I learned computers on 486 and Pentium machines back when I was in school taking computer education classes.
We were taught and learned everything, or so I thought, but there is a lot I missed.
I am surprised how much software is out there for the 486 and Pentium computers.

Microsoft, Apple, UNIX, Novell, DOS, Linux, C programming, etc.

Back in the 1990s we had nationwide networks and global networks too.
The nationwide networks were on Ethernet, but the global networks were on dial-up.

We ran every type of software: workstations, servers, web servers, databases, etc.

These computers were running the world.

Last edited by Intel486dx33 on 2025-05-28, 04:20. Edited 1 time in total.

Reply 16 of 47, by leileilol

User metadata
Rank: l33t++

carmack worship article based on someone's thread that happened to have him respond; no thanks. Besides, it was abrash and hook who optimized his quake games.

BinaryDemon wrote on 2025-05-27, 13:12:

Maybe AI could help with the optimization part before the platform is obsolete.

"AI" means technical debt. no don't try to make it cute and dub it as 'vibe coding' either. Did you know he defended that awful hallucinated quake2? That's the complete polar opposite of an optimized thing you can think of.

......anyway, how about those "lightweight" platform-regressive gui toolkits pushed around (qt) and chromiums/awesomiums/electrons!!!

long live PCem

Reply 17 of 47, by myne

User metadata
Rank: Oldbie
Intel486dx33 wrote on 2025-05-28, 04:00:

What do you think the 1990s were running on?

who?

I built:
Convert old ASUS ASC boardviews to KICAD PCB!
Re: A comprehensive guide to install and play MechWarrior 2 on new versions on Windows.
Dos+Windows 3.11+tcp+vbe_svga auto-install iso template
Script to backup Win9x\ME drivers from a working install
Re: The thing no one asked for: KICAD 440bx reference schematic

Reply 18 of 47, by Falcosoft

User metadata
Rank: l33t
myne wrote on 2025-05-28, 00:18:

Back in the day, Carmack and even Microsoft would have multiple code paths depending on feature support.

Pipelined FPU was not a 'feature' but a microarchitecture implementation detail. There was no CPUID feature bit associated with it. You could detect the presence of an FPU, and Cyrix/AMD CPUs reported they had one.
Some FPU instructions like FXCH were virtually free on the Pentiums, and Quake's code used them very extensively. But they were not free on any other x87 units. Of course, id Software could have written alternative code paths using a dispatcher based on e.g. the CPU vendor string (as Intel compilers have done for a long time). But in that case, when the Athlon was released with an even superior pipelined FPU, it would have used the inferior/slower code because of the 'unfair' vendor-string-based dispatcher...
BTW, some may argue that thanks to Quake 1/2, all later FPU implementations had to make the FXCH instruction virtually free 😀
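
For illustration only, a minimal sketch of such a vendor-string dispatcher in C (hypothetical, not id's actual code), using GCC/Clang's <cpuid.h>; CPUID leaf 0 returns the 12-byte vendor string split across EBX, EDX and ECX:

#include <cpuid.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;
    char vendor[13] = {0};

    /* __get_cpuid returns 0 if the requested CPUID leaf is unsupported. */
    if (!__get_cpuid(0, &eax, &ebx, &ecx, &edx))
        return 1;

    /* Reassemble the vendor string in register order EBX, EDX, ECX. */
    memcpy(vendor + 0, &ebx, 4);
    memcpy(vendor + 4, &edx, 4);
    memcpy(vendor + 8, &ecx, 4);

    if (strcmp(vendor, "GenuineIntel") == 0)
        puts("fast path: scheduling tuned for the pipelined FPU");
    else
        puts("generic path");  /* an Athlon would land here despite its fast FPU */
    return 0;
}

The comment on the generic path is the 'unfairness' in a nutshell: the dispatch keys on the vendor, not on the actual FPU behaviour.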

Website, Youtube
Falcosoft Soundfont Midi Player + Munt VSTi + BassMidi VSTi
VST Midi Driver Midi Mapper
x86 microarchitecture benchmark (MandelX)

Reply 19 of 47, by vvbee

User metadata
Rank: Oldbie
leileilol wrote on 2025-05-28, 04:09:

"AI" means technical debt. no don't try to make it cute and dub it as 'vibe coding' either. Did you know he defended that awful hallucinated quake2? That's the complete polar opposite of an optimized thing you can think of.

AI upscaling and frame generation mean modern games are playable on otherwise obsolete hardware. You can't take AI out of this thought experiment any more than you can take out optimizing compilers.