VOGONS


Reply 20 of 47, by myne

User metadata
Rank Oldbie

Back in my day, we rendered frames at 4x resolution and downsampled them. We called it antialiasing.

Now we render at 1/4 resolution, apply what is most likely just a standard old stretching algorithm, add "AI" to the name, and sell it as a bonus.

The world really does get flipped upside down sometimes.

I built:
Convert old ASUS ASC boardviews to KICAD PCB!
Re: A comprehensive guide to install and play MechWarrior 2 on new versions on Windows.
Dos+Windows 3.11+tcp+vbe_svga auto-install iso template
Script to backup Win9x\ME drivers from a working install
Re: The thing no one asked for: KICAD 440bx reference schematic

Reply 21 of 47, by gerry

User metadata
Rank l33t
RandomStranger wrote on 2025-05-27, 16:08:

What's different now?
Yeah, sloppy optimization is one problem, but also modern eye-candy features are absolute performance hogs without really adding anything of value gameplay-wise or fundamentally improving the visuals. Real-time ray tracing, for example, can wipe out 25-40% of your frame rate, and most of the time the difference in visuals is only marginal.

aside from good points made by wierd_w regarding specific use cases I generally don't mind if, for example, water is modelled as accurately as possible or as a kind of set of animations. The game matters. Same with things like trees and actually most environment features - they don't have to adhere to actual physics in order for the game to be as good as possible, it's fun to see that level of modelling but its not necessary. For game dev that raises another point, most games are built on layers of libraries, I'm sure they are often well programmed and so on - but there are layers of them, the likelihood of hardware specific optimisation is low, it's all interfaces and libraries

Reply 22 of 47, by gerry

User metadata
Rank l33t
davidrg wrote on 2025-05-27, 21:27:

All high level languages trade some level of efficiency (whether it be CPU or memory) for improved developer efficiency. Time is money after all, so if you can implement a feature in half the time using a higher level language and the increased system requirements aren't a problem then the choice is obvious. But even among high level languages and frameworks there are different levels. I remember when Java and .net were considered slow and bloated, but they're a model of efficiency when compared to something like Electron. IMO spinning up an entire web browser just to edit a text file is really going a bit far when it comes to wasting compute resources with wild abandon, not to mention the dependency nightmare that development platform inflicts on desktop apps, which will probably in the long run kill developer efficiency just as badly as it kills resource efficiency.

that's about it really, just implementing something like garbage collection has a cost. Now there are languages that invoke huge instruction cycles in seemingly elegant one-line calls to a library function; it makes development faster but keeps optimisation at arm's length. That's not to say any given dev could optimise better than a library function, but some could - especially where only part of the function's work is needed, for instance
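A tiny hypothetical C illustration of that last point (the function names are made up for this sketch): calling a general library routine when only part of its work is needed, versus doing just the needed part directly.

```c
#include <stdlib.h>

static int cmp_int(const void *a, const void *b)
{
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

/* The elegant one-liner: sort everything, then read element 0.
   O(n log n) work plus a function call per comparison. */
int min_via_sort(int *a, size_t n)
{
    qsort(a, n, sizeof a[0], cmp_int);
    return a[0];
}

/* Only the minimum was needed: one O(n) pass, no calls, no reordering. */
int min_direct(const int *a, size_t n)
{
    int m = a[0];
    for (size_t i = 1; i < n; i++)
        if (a[i] < m)
            m = a[i];
    return m;
}
```

Both return the same answer; the second just skips the work the caller never asked for.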

Reply 23 of 47, by GemCookie

User metadata
Rank Member
vvbee wrote on 2025-05-28, 06:55:

AI upscaling and frame generation mean modern games are playable on your obsolete hardware. Can't take AI out of this thought experiment any more than you can optimizing compilers.

How so, if it's only available on video cards from the past 3 years?

Gigabyte GA-8I915P Duo Pro | P4 530J | GF 6600 | 2GiB | 120G HDD | 2k/Vista/10
MSI MS-5169 | K6-2/350 | TNT2 M64 | 384MiB | 120G HDD | DR-/MS-DOS/NT/2k/XP/Ubuntu
Dell Precision M6400 | C2D T9600 | FX 2700M | 16GiB | 128G SSD | 2k/Vista/11/Arch/OBSD

Reply 24 of 47, by vvbee

User metadata
Rank Oldbie
GemCookie wrote on 2025-05-28, 08:26:
vvbee wrote on 2025-05-28, 06:55:

AI upscaling and frame generation mean modern games are playable on your obsolete hardware. Can't take AI out of this thought experiment any more than you can optimizing compilers.

How so, if it's only available on video cards from the past 3 years?

In five years you'll be saying it's only available on cards from the past eight years - how much are you willing to give up? That's why it's a "fun thought experiment" and nothing more. Obviously AI will be the one writing code going forward, and your hardware will come along, making that and many other things possible.

Reply 25 of 47, by amadeus777999

User metadata
Rank Oldbie

I have always enjoyed Carmack's thoughts and tinkerings - lovely memories of .plan files and '90s hardware.

Reply 26 of 47, by marxveix

User metadata
Rank Oldbie

Every new Windows OS has more bloatware: so much unneeded stuff included that you're forced to have.

30+ MiniGL/OpenGL Win9x files for all Rage3 cards: Re: ATi RagePro OpenGL files

Reply 27 of 47, by gerry

User metadata
Rank l33t
vvbee wrote on 2025-05-28, 08:52:

Obviously AI will be the one writing code going forward and your hardware will come along making that and many other things possible.

that might happen for a while, but why write code that needs to be run when the generative AI can just 'generate' a game on the fly as an interface. Then we'll never know what's happening under the hood - not for games, text, images, movies - nothing at all

Reply 28 of 47, by BinaryDemon

User metadata
Rank Oldbie
leileilol wrote on 2025-05-28, 04:09:

"AI" means technical debt. no don't try to make it cute and dub it as 'vibe coding' either. Did you know he defended that awful hallucinated quake2? That's the complete polar opposite of an optimized thing you can think of.

Not sure exactly what you meant, so I'll explain what I had in mind: I would guess AI could brute-force test a thousand different hardware configurations and optimize code paths for each, as opposed to coders now, who might have a few optimized CPU code paths and maybe five GPU renderer options.

Reply 29 of 47, by bakemono

User metadata
Rank Oldbie

Of course the world could run on older hardware. It already did, before the newer hardware arrived. It's a big mistake to assume that software is a means to an end, rather than its own end. Hardware constraints won't lead to optimization of the latter. Surveillance phones have strict size and power constraints compared to real computers, but that hasn't led to more optimized software. Instead it led to an explosion of the worst bloatware ever seen. My employer's 'app' is around 70MB compressed, uses about 10MB of data per page load, is super slow, and is chronically glitchy and disorganized. All that to do the same things that a plain text menu system running in a terminal at 9600bps could do better.

GBAJAM 2024 submission on itch: https://90soft90.itch.io/wreckage

Reply 30 of 47, by keenmaster486

User metadata
Rank l33t
leileilol wrote on 2025-05-28, 04:09:

Did you know he defended that awful hallucinated quake2? That's the complete polar opposite of an optimized thing you can think of.

Yeah I saw that, disappointing. It's unfortunately easy for guys like him to fall down the "it's conceptually interesting so it must have value" type of rabbit hole.

World's foremost 486 enjoyer.

Reply 31 of 47, by vvbee

User metadata
Rank Oldbie
gerry wrote on 2025-05-28, 10:01:
vvbee wrote on 2025-05-28, 08:52:

Obviously AI will be the one writing code going forward and your hardware will come along making that and many other things possible.

that might happen for a while, but why write code that needs to be run when the generative AI can just 'generate' a game on the fly as an interface. Then we'll never know what's happening under the hood - not for games, text, images, movies - nothing at all

If a general model can do this then we may as well be living in a simulation. But AI with prompting could train a smaller model that encapsulates a game's logic and so provides the experience. Code is by far the more resource efficient way though so probably we'll see more of that. With AI being as promising and compute hungry as it is you as a consumer wouldn't want hardware resources to be gimped now, but if they were and you were in the business of training and serving AI you'd rather people got used to slower computers so you could have more of it.

Reply 32 of 47, by Jo22

User metadata
Rank l33t++

LLM, not "AI"..

"Time, it seems, doesn't flow. For some it's fast, for some it's slow.
In what to one race is no time at all, another race can rise and fall..." - The Minstrel

//My video channel//

Reply 33 of 47, by gerry

User metadata
Rank l33t
vvbee wrote on 2025-05-29, 07:15:

If a general model can do this then we may as well be living in a simulation. But AI with prompting could train a smaller model that encapsulates a game's logic and so provides the experience. Code is by far the more resource efficient way though so probably we'll see more of that. With AI being as promising and compute hungry as it is you as a consumer wouldn't want hardware resources to be gimped now, but if they were and you were in the business of training and serving AI you'd rather people got used to slower computers so you could have more of it.

the requirement for computation is the limiting factor here; the gap may never close if AI produces more and more advances faster as code rather than via direct output. It's difficult to predict, and it depends on global energy to drive ever more powerful computation, or on whether some quantum computing potential exists. I can see a fractional use in the shorter term though - it's happening now - so perhaps there will be a gradual closing of that gap over years until we reach a point where a line is crossed

I played that AI-generated Quake 2 game - it was interesting to think there is no game engine or map as such; facing a wall really did change where you were in the made-up map! Like early AI videos compared to the latest releases, though, it may be overtaking our expectations soon enough

Reply 34 of 47, by vvbee

User metadata
Rank Oldbie

There was a game engine and a map in that there's a logical continuum driving an experience. Very inefficient and hostile to tuning. I think if you're willing to project out a 100 years then real-time video generation could be a mainstream gaming platform, but that's a given for most computing fantasies. Coding turned out to be relatively easy for AI, same as natural language.

Reply 35 of 47, by gerry

User metadata
Rank l33t
vvbee wrote on 2025-05-29, 10:34:

There was a game engine and a map in that there's a logical continuum driving an experience. Very inefficient and hostile to tuning. I think if you're willing to project out a 100 years then real-time video generation could be a mainstream gaming platform, but that's a given for most computing fantasies. Coding turned out to be relatively easy for AI, same as natural language.

I suspect it'll be faster, no real difference in hardware in two years for video generation:

https://arstechnica.com/ai/2025/05/googles-wi … ut-its-crunchy/

There's a lot more for a 'generated' game to do, so it won't be either/or but both - both compiled code and generated output, and soon enough both behind the obfuscation that is the tangled networks of AI

Reply 36 of 47, by SPBHM

User metadata
Rank Oldbie
marxveix wrote on 2025-05-28, 09:26:

Every new windows os has more bloatware, so much unneeded stuff included, forced to have.

this is true, but some of it is justified in terms of security, because we are always connected and the threats are far more advanced. Some of it is just nonsense: trying to gather more data for advertising, weird research, and monetization.

at the same time, linux feels pretty lightweight in general, but in the end it's not significantly faster at most regular tasks and gaming on current computers

Reply 37 of 47, by wierd_w

User metadata
Rank Oldbie

I would argue the last may not be accurate.

The devil's in the details of course, but look at the perf data from the steam community survey, regarding ROG Ally devices, since steamos became supported.

Linux is demonstrably more performant in that head to head comparison.

https://www.windowscentral.com/gaming/pc-gami … ll-as-usability

Reply 38 of 47, by vvbee

User metadata
Rank Oldbie
gerry wrote on 2025-05-29, 12:25:
vvbee wrote on 2025-05-29, 10:34:

There was a game engine and a map in that there's a logical continuum driving an experience. Very inefficient and hostile to tuning. I think if you're willing to project out a 100 years then real-time video generation could be a mainstream gaming platform, but that's a given for most computing fantasies. Coding turned out to be relatively easy for AI, same as natural language.

I suspect it'll be faster, no real difference in hardware in two years for video generation:

https://arstechnica.com/ai/2025/05/googles-wi … ut-its-crunchy/

There's a lot more to do for a 'generated' game to do so it won't be either/or but both - both compiled code and generated output, and soon enough both behind the obfuscation that is the tangled networks of ai

Ten seconds of uncanny valley video with a simple subject is too far from a cohesive and rewarding 50-hour gameplay experience that must be generated uniquely enough millions of times per title. Small bits of generated content will be used in games, but that's very different from AI generating gaming experiences directly, which also has existential implications.

Reply 39 of 47, by cyclone3d

User metadata
Rank l33t++

As far as optimizations that are somewhat obvious, though not extremely easy to implement, in C/C++ and most likely some other languages:

Unrolling loops. Doing this to tight loops that run over and over again can yield performance improvements of 25% or higher.
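As a rough C sketch of the unrolling idea (a hypothetical summing loop; actual gains depend heavily on the compiler and CPU, and modern compilers may already unroll at higher optimization levels):

```c
#include <stddef.h>

/* Plain loop: one add plus loop-control overhead per element. */
long sum_simple(const int *a, size_t n)
{
    long s = 0;
    for (size_t i = 0; i < n; i++)
        s += a[i];
    return s;
}

/* Unrolled by 4: the compare/branch cost is amortized over four adds.
   A tail loop picks up lengths that are not a multiple of 4. */
long sum_unrolled(const int *a, size_t n)
{
    long s = 0;
    size_t i = 0;
    for (; i + 4 <= n; i += 4)
        s += a[i] + a[i + 1] + a[i + 2] + a[i + 3];
    for (; i < n; i++)
        s += a[i];
    return s;
}
```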

Replacing case statements with jump tables (even works for functions) can yield a performance increase of 20% or higher.
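A minimal sketch of the jump-table approach in C (the handlers and dispatcher here are hypothetical, just to show the shape of the refactoring):

```c
typedef int (*op_fn)(int, int);

/* Handlers share one signature so they can live in a single table. */
static int op_add(int a, int b) { return a + b; }
static int op_sub(int a, int b) { return a - b; }
static int op_mul(int a, int b) { return a * b; }

/* One bounds check plus an indexed indirect call replaces the
   compare-and-branch chain a switch statement can compile to. */
static const op_fn ops[] = { op_add, op_sub, op_mul };

int dispatch(unsigned op, int a, int b)
{
    if (op >= sizeof ops / sizeof ops[0]) /* the added range check */
        return 0;
    return ops[op](a, b);
}
```

The range check is exactly the kind of additional check mentioned below: the switch's implicit default case has to become an explicit bounds test.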

Of course some refactoring has to be done, but I have used both of these methods and they can help immensely, even if you have to add additional checks to the code in certain instances.

Other optimizations that can also yield pretty big performance increases are:

Not using floating point math.

Using addition, subtraction, and multiplication in place of division and other methods that use division.

Bit shifting instead of other mathematical methods where possible.

Other bit twiddling tricks.

Deliberately losing bits that are not needed instead of doing calculations to figure out which digits to pull from a value.

Bit packing to reduce memory footprint.
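A few of those tricks side by side, as a hedged C sketch (16.16 fixed point, multiply-and-shift in place of a divide, and 5:6:5 bit packing; these formats and constants are common conventions, not anything specific from this thread):

```c
#include <stdint.h>

/* 16.16 fixed point: integer adds, muls, and shifts stand in for floats. */
typedef int32_t fix16;
#define FIX(x) ((fix16)((x) * 65536))

fix16 fix_mul(fix16 a, fix16 b)
{
    /* widen, multiply, then shift right instead of dividing by 65536 */
    return (fix16)(((int64_t)a * b) >> 16);
}

/* Division by a constant replaced with multiply + shift:
   (n * 6554) >> 16 equals n / 10 for n up to 16388. */
uint32_t div10(uint32_t n)
{
    return (n * 6554u) >> 16;
}

/* Bit packing: 24-bit RGB squeezed into a 16-bit 5:6:5 word,
   deliberately dropping the low bits that are not needed. */
uint16_t pack565(uint8_t r, uint8_t g, uint8_t b)
{
    return (uint16_t)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
}
```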

There are a ton of manual optimizations that can be done better than the compiler optimization functionality.

If you really want to get into the weeds, you can output the assembler code and then manually do more optimizations before compiling the program.

Yamaha modified setupds and drivers
Yamaha XG repository
YMF7x4 Guide
Aopen AW744L II SB-LINK