VOGONS


Reply 40 of 47, by keenmaster486

User metadata
Rank l33t

Ah, but those are microoptimizations compared to what Carmack is talking about, which is these layers upon layers of abstraction that we've built that let developers write a calculator app with 100 lines of Javascript that bloats the end result by 10000x what it could be.

World's foremost 486 enjoyer.

Reply 41 of 47, by cyclone3d

User metadata
Rank l33t++
keenmaster486 wrote on 2025-05-30, 14:42:

Ah, but those are microoptimizations compared to what Carmack is talking about, which is these layers upon layers of abstraction that we've built that let developers write a calculator app with 100 lines of Javascript that bloats the end result by 10000x what it could be.

That slowness caused by abstraction is not necessarily the language's problem, or even an abstraction problem. The libraries that make those types of things easier have not been programmed properly IF they cause bloat or slowness.

Interpreted languages have their purpose but are generally bloaty / slow. That being said, things like multiple programmatically generated email signatures for Outlook, with data being pulled from Active Directory, can be completed in 3-4 seconds, compared to the original code I was given to work off of, which took around 30-45 seconds or more.

This was done in Visual Basic Script.

I've also got a "script" that uses JavaScript, VBScript, and HTML that is about as fast as possible. It was made around 7 or more years ago and still worked as designed when I tested it a few months ago.

I really don't classify that one as a script as it is basically a full blown program with GUI elements.

One of the programs I worked on for fun, years and years ago while I was in college, is a somewhat rudimentary calculator, written in C, that can do arbitrarily large number calculations with arbitrary decimal place precision. It is limited only by the amount of RAM available. No special memory-saving tricks or anything like that, and it isn't super fast, as it does everything much like you would do it on paper.

No matter the result (you can let it run for however long you want), on division calculations that don't come out evenly it will not need to allocate any more RAM than was originally allocated. Time for each digit of the result is just about O(1).

It originally took up a single byte per digit used in the initial equation plus a tiny bit more for the actual executable.

At one point, I had gotten everything to work with 32-bit unsigned integers but then tried to go to 64-bit unsigned integers and had trouble getting everything to work properly. The RAM saved by making those changes also helped with the speed but made certain things a lot more complicated to figure out.

A huge problem is that a large majority of coders (not worth calling them programmers) just get something working and don't even bother trying to make it work efficiently.

For all the flak that some languages or sub-languages get, such as JavaScript, VBScript, and VBA for example, they can be useful and performant.

C can be about as fast as possible without going as far as programming in assembly language, but even assembly can be unoptimized if you don't know what you are doing.

Manually optimizing code better than the compiler can is, in effect, optimizing the generated assembly output.

If you want ultimate speed and lowest RAM requirements, why not just go directly to writing everything in machine language?

There is a point at which more performance is not needed and the time spent making the performance better is not worth it.

You could, theoretically, write a compiler that targets a very specific piece of hardware and achieves perfect or near-perfect performance on it, but the result would not work, or would be very unoptimized, on even a very similar piece of hardware.

Compilers for general use do have switches to target specific types of hardware, but there is always going to be performance left on the table due to the massive number of variables in system configurations: instruction sets, number of cores, number of cache levels, size of caches, variations in interconnect speeds (and thus slight hardware delay differences even with "identical" hardware), amount and speed of RAM, RAM timings, etc.

Yamaha modified setupds and drivers
Yamaha XG repository
YMF7x4 Guide
Aopen AW744L II SB-LINK

Reply 42 of 47, by gerry

User metadata
Rank l33t
keenmaster486 wrote on 2025-05-30, 14:42:

Ah, but those are microoptimizations compared to what Carmack is talking about, which is these layers upon layers of abstraction that we've built that let developers write a calculator app with 100 lines of Javascript that bloats the end result by 10000x what it could be.

indeed, the efficiency, though, is that the same multiple layers that supported the calculator app also support a huge variety of other applications. But yes: so many layers, APIs, libraries and more before application logic even gets to run

Reply 43 of 47, by vvbee

User metadata
Rank Oldbie
keenmaster486 wrote on 2025-05-30, 14:42:

Ah, but those are microoptimizations compared to what Carmack is talking about, which is these layers upon layers of abstraction that we've built that let developers write a calculator app with 100 lines of Javascript that bloats the end result by 10000x what it could be.

Writing a calculator app in assembly vs. asking Claude to write you a calculator app is a good timeline of the de-abstraction of goals into technical concerns. The tooling shovels away the technical detour, so to the developer it looks like growing abstraction.

Reply 44 of 47, by wierd_w

User metadata
Rank Oldbie

Only valid to a certain extent.

Thread safety, memory safety, correct access to resources, yes. Those are handled with proper api calls. (the underlying api functions perform these tasks as part of the operating system itself; doing things behind the OS's back is never appropriate with a modern OS.)

Calling .net, which then calls a middle layer wrapper, which calls a local surrogate for win32api, which THEN calls win32api, is what is meant here.

The whole reason for doing it is summed up with this list:

1) easy to target high level functions. Scary OS calls not needed! Save Precious Developer Time(tm) by wasting the user's forever!

2) look, we dont like having to keep old software interfaces and doing regression tests! Forcing you to target the high level function instead, with lots of calls later, lets us service those calls however we feel like. We're microsoft, and we know what's best!

Whereas just targeting the OS-exported API gives you proper, speedy control without a lot of bullshit.

Last edited by wierd_w on 2025-05-31, 05:23. Edited 1 time in total.

Reply 45 of 47, by zyzzle

User metadata
Rank Member
cyclone3d wrote on 2025-05-30, 16:24:

One of the programs I worked on for fun, years and years ago, while I was in college, is a somewhat rudimentary calculator, written in C, that can do infinite sized number calculations with infinite decimal place precision. It is only limited by the amount of RAM available. No special memory saving tricks or anything like that and it isn't super fast as it does everything kind of like you would do it on paper.

No matter the result ( you can let it run for however long you want ) on division calculations that don't come out evenly, it will not need to allocate any more RAM than was originally allocated. Time for each digit of the result is just about O(1).

It originally took up a single byte per digit used in the initial equation plus a tiny bit more for the actual executable.

If you don't mind, do you have a binary executable of your calculator to try out? BC.EXE for DOS, compiled with DJGPP, does something similar to this. It seems to use unlimited precision and decimal places (I've calculated results with millions of digits with it). That binary is about ~33 KB.

Reply 46 of 47, by myne

User metadata
Rank Oldbie
cyclone3d wrote on 2025-05-30, 16:24:

A huge problem is that a large majority of coders (not worth calling them programmers) just get something working and don't even bother trying to make it work efficiently.

Cost benefit applies.
Look at my sig.
I could rewrite the ASC converter in C, and I could make it run in a second.
But I'd have to learn a lot more about C, and in PowerShell it is done in under 30 secs.

The cost of optimisation far, far outweighs the performance benefit of a conversion run once per leaked board.

Even a few minutes would be acceptable performance.

I built:
Convert old ASUS ASC boardviews to KICAD PCB!
Re: A comprehensive guide to install and play MechWarrior 2 on new versions on Windows.
Dos+Windows 3.11+tcp+vbe_svga auto-install iso template
Script to backup Win9x\ME drivers from a working install
Re: The thing no one asked for: KICAD 440bx reference schematic

Reply 47 of 47, by wierd_w

User metadata
Rank Oldbie

I agree that there is a big difference between "I made this to do some FOO thing, and I am the only real intended user. It's not really meant for a community of users to do meaningful, repeated work with; it's meant to do a one-off task" and "Here's my newest offering, please pay me 60 US dollars. (But I totally did not give any real effort toward polish or performance; I prioritized my own convenience instead.)"

I am pretty sure that the arguments being presented are about the latter.

My raised point was that OS vendors these days don't WANT you hooking the actually performant underlying OS calls directly, because they change the underlying API like a teenage girl changes clothes. They want you to hook a middle abstraction layer, which the OS vendor then shims however it wants as the API underneath churns. (The problem here is that the OS maker starts getting lax itself, and starts chaining its own middle layers on top of each other, in ever-increasing layers of shims for shims for shims, or not completely replacing one technology with another -- see also the complete shitshow that is the "Settings app" and "Control Panel" on modern Windows, in which SOME functions are now done one way while others are not, etc.)

As an application developer, being stuck holding the bag because the OS vendor is fickle and imperious about that kind of thing means having to live with that inefficiency, because the alternative is continually broken software, and the implication that you don't know what you are doing / don't care at all.

In other cases, said application developer has no choice but to target an abstracted set of interfaces to accomplish what they need done, because that thing involves interacting with a hardware device that the OEMs out there fight for dominance over (like video cards, and 3D acceleration features). Dealing with these bespoke things is exactly what these abstraction libraries are all about: instead of trying to write code for the 30+ offerings from each OEM out there, you target one set of APIs with a minimum feature level, and do the best you possibly can with that.

In these cases, there isn't much the application programmer can do. Just write the most efficient program logic they can in their own program, and optimize how they make these calls. That's about it. They aren't responsible for the shitshow the OS is doing, and realistically can't do much about it.

The sins start to crop up when you, as an application developer, start taking extra helpings of libraries, often for things that you really don't need a library for, other than your own convenience (i.e., not actually essential like the cases above -- say, a library to easily talk to a serial port, or a library to do some fancy string manipulations with easier-to-use primitives. The potential here is basically limitless.)

Once you start doing that, to save time for yourself, you are wasting the end user's system resources: the library goes 98% unused by your application, but the memory is consumed to load the whole thing anyway, because that's how this works -- and all just because you didn't want to write a string manipulation function yourself.

When enough software does that, you end up with the kinds of bloat our old bastards here (myself included) find disfavorable.

If you are making a program for just yourself, for a single one-off thing, then sure-- you are perfectly legit in not giving two shits, as long as it works. You are the only one who has to use that software anyway. Perfectly OK.

It's when you try to sell it, or your software is *THE* solution for a problem out there, that things get bad. If you sell your software, you really REALLY should be more considerate of the end user's system and its resources. You should make every effort to make your software a good guest, and not a Karen demanding to see the manager with elevated privileges.

That's my view on the matter anyway.