Reply 40 of 47, by cyclone3d
keenmaster486 wrote on 2025-05-30, 14:42:
Ah, but those are microoptimizations compared to what Carmack is talking about, which is these layers upon layers of abstraction that we've built that let developers write a calculator app with 100 lines of Javascript that bloats the end result by 10000x what it could be.
That slowness due to abstraction is not necessarily the language's problem, or even an abstraction problem. If the libraries that make those kinds of things easier cause bloat or slowness, they simply have not been programmed properly.
Interpreted languages have their purpose but are generally bloaty / slow. That being said, something like generating multiple email signatures for Outlook programmatically, with the data pulled from Active Directory, can be completed in 3-4 seconds, compared to the original code I was given to work from, which took around 30-45 seconds or more.
This was done in Visual Basic Script.
I've also got a "script" that uses JavaScript, VBScript, and HTML and is about as fast as possible. It was made seven or more years ago and still worked as designed a few months ago when I tested it.
I really don't classify that one as a script, though, as it is basically a full-blown program with GUI elements.
One of the programs I worked on for fun, years ago while I was in college, is a somewhat rudimentary calculator, written in C, that can do arbitrary-precision calculations: numbers of any size, with any number of decimal places, limited only by the amount of RAM available. There are no special memory-saving tricks or anything like that, and it isn't super fast, since it does everything roughly the way you would do it on paper.
On division calculations that don't come out evenly, no matter how long you let it run, it never needs to allocate any more RAM than was originally allocated, and the time for each digit of the result is just about O(1).
It originally took up a single byte per digit used in the initial equation plus a tiny bit more for the actual executable.
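The reason division can run forever in constant space is the same reason it works on paper: the only state carried from one digit to the next is the remainder, which is always smaller than the divisor. Here is a minimal sketch of that idea (my illustration for this post, not the original code, and using machine-word integers rather than the digit-per-byte representation):

#include <stdio.h>

/* Print n_digits fractional digits of a/b, paper-style.
 * Assumes 0 <= a < b, b != 0, and b small enough that r*10
 * fits in an unsigned long. */
static void divide_digits(unsigned long a, unsigned long b, int n_digits)
{
    unsigned long r = a;                /* running remainder: the only state */
    for (int i = 0; i < n_digits; i++) {
        r *= 10;                        /* shift in the next decimal place */
        putchar('0' + (int)(r / b));    /* emit one quotient digit: O(1) work */
        r %= b;                         /* remainder stays below b forever */
    }
}

int main(void)
{
    printf("1/7 = 0.");
    divide_digits(1, 7, 30);            /* prints 142857 repeating */
    putchar('\n');
    return 0;
}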
At one point, I had gotten everything to work with 32-bit unsigned integers, but then tried to go to 64-bit unsigned integers and had trouble getting everything to work properly. The RAM saved by making those changes also helped with the speed, but it made certain things a lot more complicated to figure out.
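For anyone curious what packing digits into words looks like, here is a sketch of the idea (again mine, not the original code): a 32-bit limb holds nine decimal digits, since 10^9 < 2^32, and a 64-bit limb could hold nineteen. One subtlety that makes a 64-bit version genuinely trickier: with base 10^19, the intermediate sum a + b + carry can exceed 2^64, so you either drop to base 10^18 or handle that overflow explicitly.

#include <stdint.h>
#include <stdio.h>

#define BASE 1000000000u    /* 10^9: nine decimal digits per 32-bit limb */
#define NLIMBS 4            /* fixed size, just for this illustration */

/* out = a + b, limbs stored least-significant first; returns final carry.
 * Max intermediate sum is 2*(BASE-1)+1, which still fits in 32 bits. */
static uint32_t add_limbs(uint32_t out[NLIMBS],
                          const uint32_t a[NLIMBS],
                          const uint32_t b[NLIMBS])
{
    uint32_t carry = 0;
    for (int i = 0; i < NLIMBS; i++) {
        uint32_t s = a[i] + b[i] + carry;
        carry = (s >= BASE);
        out[i] = carry ? s - BASE : s;
    }
    return carry;
}

int main(void)
{
    uint32_t a[NLIMBS] = { 999999999u, 999999999u, 0, 0 };  /* 10^18 - 1 */
    uint32_t b[NLIMBS] = { 1, 0, 0, 0 };
    uint32_t out[NLIMBS];
    add_limbs(out, a, b);
    for (int i = NLIMBS - 1; i >= 0; i--)   /* most significant limb first */
        printf("%09u ", (unsigned)out[i]);  /* prints 10^18, nine digits per limb */
    putchar('\n');
    return 0;
}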
A huge problem is that a large majority of coders (not worth calling them programmers) just get something working and don't even bother trying to make it work efficiently.
For all the flak that some languages and scripting dialects get (JavaScript, VBScript, and VBA, for example), they can be useful and performant.
C can be about as fast as possible without going as far as programming in assembly language, but even assembly can be unoptimized if you don't know what you are doing.
Manually optimizing code beyond what the compiler can do is, in effect, optimizing the compiler's generated assembly output, so the first step is to look at what the compiler actually emits.
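A quick way to do that with GCC or Clang is to compile with -S and read the assembly. A small sketch (my example, not anything from this thread) where the "clever" hand optimization turns out to be something the compiler already does at -O2:

#include <stdint.h>

/* Compile with `gcc -O2 -S avg.c` and compare the two listings. */
uint32_t avg_plain(uint32_t a, uint32_t b)
{
    return (a + b) / 2;     /* the compiler emits a shift here at -O2 */
}

uint32_t avg_clever(uint32_t a, uint32_t b)
{
    return (a + b) >> 1;    /* same generated code as the version above */
}

If the two listings match, the hand optimization bought nothing and the time is better spent elsewhere.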
If you want ultimate speed and lowest RAM requirements, why not just go directly to writing everything in machine language?
There is a point at which more performance simply isn't needed, and the time spent squeezing it out is no longer worth it.
You could, theoretically, write a compiler targeting one very specific piece of hardware that would achieve perfect or near-perfect performance on it, but its output would not work, or would be very unoptimized, on even a very similar piece of hardware.
General-purpose compilers do have switches to target specific types of hardware, but some performance is always going to be left on the table due to the massive number of variables in system configurations: instruction sets, number of cores, number of cache levels, cache sizes, variations in interconnect speeds (and thus slight timing differences even between "identical" hardware), amount and speed of RAM, RAM timings, etc.
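GCC and Clang do let you tune for the build machine with -march=native, and the other common answer is to choose a code path at run time. A minimal x86-specific sketch using GCC's real __builtin_cpu_supports() builtin (the function names and the two variants are made up for illustration):

#include <stdio.h>

static void transform_generic(void) { puts("baseline code path"); }
static void transform_avx2(void)    { puts("AVX2 code path"); }

int main(void)
{
    __builtin_cpu_init();               /* initialize the CPU feature probe */
    if (__builtin_cpu_supports("avx2"))
        transform_avx2();               /* taken on CPUs that report AVX2 */
    else
        transform_generic();            /* safe fallback everywhere else */
    return 0;
}

This doesn't recover everything (cache sizes and RAM timings still vary), but it gets the big instruction-set wins without shipping one binary per machine.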