I own Turbo C++, and somewhere (I'm not sure where) I might still have the manuals.
Back in the day, at age 14, I didn't get very far learning C++, but I thought I'd have a go at retro coding.
Some of the examples mention "Hands-on C++", which I don't remember - but then it's been decades; I have no idea what its cover would even look like. Yellow and red like the manuals? Claude.ai tells me it came with "some edition", but I'm not sure I trust Claude on this.
Btw, C++ is just a language, while retro DOS programming involves learning how to talk directly to the hardware, the BIOS, and DOS system calls.
Hanging on to the book and learning the C++ standard library and the Borland libs in detail probably won't get you where you want to go.
Grasp the basics of the environment and the language, then move on to material on programming "directly" under DOS.
For example, Borland ships the BGI, a graphics library, but it is slow and not a good choice if you want to write a game. While it may seem easy to just use it while exploring C++, it is certainly not what you want to use in the end.
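For illustration - a minimal sketch of what "direct" programming looks like here, in Turbo C++ syntax: VGA mode 13h (320x200, 256 colors), poking pixels straight into video memory at A000:0000 instead of going through the BGI. (From-memory sketch, not tested on real hardware, but the BIOS calls and addresses are the standard ones.)

#include <conio.h>
#include <dos.h>

unsigned char far *vga = (unsigned char far *)MK_FP(0xA000, 0);

void set_mode(unsigned char mode)
{
    union REGS r;
    r.h.ah = 0x00;                  /* BIOS int 10h: set video mode */
    r.h.al = mode;
    int86(0x10, &r, &r);
}

int main(void)
{
    int x, y;
    set_mode(0x13);                 /* 320x200, 256 colors */
    for (y = 0; y < 200; y++)
        for (x = 0; x < 320; x++)   /* unsigned math: offsets go past 32767 */
            vga[(unsigned)y * 320 + x] = (unsigned char)(x + y);
    getch();
    set_mode(0x03);                 /* back to 80x25 text */
    return 0;
}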
Oh, thanks - I guess Claude was right and these really did only come with some versions, as I never saw this in my copy bought in the UK.
Btw, C++ is just a language, while retro DOS programming involves learning how to talk directly to the hardware, the BIOS, and DOS system calls.
Hanging on to the book and learning the C++ standard library and the Borland libs in detail probably won't get you where you want to go.
Grasp the basics of the environment and the language, then move on to material on programming "directly" under DOS.
Yep, don't worry - I am aware that C++ is prehistoric in this context. I did a bunch of Turbo Pascal and QuickBasic programming at the time and am just using this as a starting point, as these kinds of environments are a little familiar. I always found TUIs interesting, and there is Turbo Vision.
Once I get a little further, I will probably give OpenWatcom a go; I'm not sure if there are any nice DOS IDEs for it (again, I know this isn't entirely necessary - I'm developing in DOSEMU2).
I guess RHIDE is an option - I'm not sure how it stacks up against other TUI editors of the time, whose best feature for me back then was the integrated help and documentation.
At some point I'll get a more modern dev environment set up, with DOSEMU2 just for running code. As far as I can see, DJGPP only goes up to GCC 2.5 and OpenWatcom is also fairly old; I guess there is no way of compiling modern C++ down to DOS, so at that point it looks like Zig might be the most modern language I can use and compile to (32-bit) DOS.
Well, you can go two ways - VESA or old VGA programming.
VGA stuff is intricate; I can recommend Michael Abrash's book, which covers it in great detail. VESA was designed to be the opposite: easy to use.
If you target a late-DOS setup with modern languages, then VESA would be the way to go. It sets up a mode and a linear framebuffer in minimal code.
Technically, you could also build SDL1 targeting VESA on DJGPP and just use that as a high-level API.
Again, it depends on what exactly you want to do - a DOS extender setup can run on anything from a 386 to a Pentium 4, and the choice of language and programming techniques changes accordingly.
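To give an idea of how little code a linear framebuffer takes, here is a sketch for DJGPP (assumes a VBE 2.0+ BIOS; a real program should read BytesPerScanLine from the mode info instead of assuming 640, and check errors more carefully):

#include <dpmi.h>
#include <go32.h>
#include <stdio.h>
#include <string.h>
#include <sys/movedata.h>
#include <sys/nearptr.h>

int main(void)
{
    __dpmi_regs r;
    __dpmi_meminfo mi;
    unsigned char info[256], *fb;
    int x, y;

    /* VBE function 01h: get mode info for 0x101 (640x480x256).
       ES:DI must point at real-mode memory, so use the transfer buffer. */
    memset(&r, 0, sizeof r);
    r.x.ax = 0x4F01;
    r.x.cx = 0x0101;
    r.x.es = __tb >> 4;
    r.x.di = __tb & 0x0F;
    __dpmi_int(0x10, &r);
    if (r.x.ax != 0x004F) return 1;
    dosmemget(__tb, sizeof info, info);
    mi.address = *(unsigned long *)(info + 40);  /* PhysBasePtr (VBE 2.0 field) */
    mi.size = 640UL * 480;

    /* VBE function 02h: set mode 0x101 with bit 14 = linear framebuffer */
    memset(&r, 0, sizeof r);
    r.x.ax = 0x4F02;
    r.x.bx = 0x0101 | 0x4000;
    __dpmi_int(0x10, &r);
    if (r.x.ax != 0x004F) return 1;

    /* Map the framebuffer into our address space and draw a gradient */
    if (__dpmi_physical_address_mapping(&mi) != 0) return 1;
    __djgpp_nearptr_enable();
    fb = (unsigned char *)(mi.address + __djgpp_conventional_base);
    for (y = 0; y < 480; y++)
        for (x = 0; x < 640; x++)
            fb[y * 640 + x] = (unsigned char)(x ^ y);

    getchar();
    memset(&r, 0, sizeof r);
    r.x.ax = 0x0003;                             /* back to text mode */
    __dpmi_int(0x10, &r);
    return 0;
}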
Is VESA programming strictly a protected-mode thing, or can it be done in real (or unreal) mode too? It's just out of curiosity, as I think that, realistically speaking, going for VESA modes (640x480 in 256 colors and above) implies machines stronger than XTs and 286s.
I think I'm misunderstanding you here. C++ is actually a bit too modern for DOS. Pascal and C were the prevalent high-level languages of the late 80s and early to mid 90s.
I guess RHIDE is an option - I'm not sure how it stacks up against other TUI editors of the time, whose best feature for me back then was the integrated help and documentation.
I think it has a texinfo browser so you can view GCC documentation in it. I'm not sure what other forms of help it supports - it might support Borland's help files. I'd be surprised if anyone has made a converter from the Watcom documentation format into anything you can view in RHIDE.
At some point I'll get a more modern dev environment set up, with DOSEMU2 just for running code
There's actually someone who has been keeping that up to date for a while now: https://www.delorie.com/pub/djgpp/current/v2gnu/00_index.txt shows GCC 14.2.0! I think I saw that there are some long-filename issues that can cause problems with some of these more modern versions, though.
OpenWatcom is also fairly old
There's the Open Watcom v2 fork, which I think is reasonably active, but I don't think it has been updated for more modern C++ standards.
C++ is actually a bit too modern for DOS. Pascal and C were the prevalent high-level languages of the late 80s and early to mid 90s.
Turbo C++ came out in 1990, and they released a few more versions of it before it morphed into being mostly about developing Windows software, so I guess some people used it.
Basically, modern languages and DOS do not go hand in hand. Why should they? DOS is a single-user runtime, not an operating system in today's sense.
An asm "Hello World" as a .COM file is 20 bytes; the same program is 5 KB in Rust. There is stuff for Go and other currently popular languages, but I'd avoid it.
It's a curiosity to build DOS executables using modern languages - fun, but useless. It only brings you bloat. You don't want a strongly typed language in DOS.
As for classic languages, you probably can't miss with the latest DJGPP, which comes with the 2020 GNU Compiler Collection (9.3).
You still have to keep your target platform in mind - basically any PC from the 386 up - and adapt your programming style to it.
The charm of DOS programming isn't the DOS stuff itself but the fact that you were resource-constrained.
Or can it be done in real (or unreal) mode too? It's just out of curiosity, as I think that, realistically speaking, going for VESA modes (640x480 in 256 colors and above) implies machines stronger than XTs and 286s.
Yes (it works in both real and unreal mode), and yes (the bpp is too large and there are too many pixels to push for those machines...).
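A sketch of the real-mode version with Turbo C: the same int 10h VBE services, just drawing through the banked window at A000h instead of a linear framebuffer. (Assumes the common 64 KB window granularity; a careful program would query it with VBE function 01h first.)

#include <conio.h>
#include <dos.h>

unsigned char far *win = (unsigned char far *)MK_FP(0xA000, 0);

void set_bank(unsigned bank)
{
    union REGS r;
    r.x.ax = 0x4F05;        /* VBE: window control */
    r.x.bx = 0x0000;        /* BH=0 set position, BL=0 window A */
    r.x.dx = bank;          /* position in granularity units */
    int86(0x10, &r, &r);
}

int main(void)
{
    union REGS r;
    long off = 123L * 640 + 45;     /* pixel (45,123) in 640x480x256 */

    r.x.ax = 0x4F02;                /* VBE: set mode */
    r.x.bx = 0x0101;                /* 640x480, 256 colors, banked */
    int86(0x10, &r, &r);
    if (r.x.ax != 0x004F) return 1;

    set_bank((unsigned)(off >> 16));        /* select the right 64 KB bank */
    win[(unsigned)(off & 0xFFFF)] = 15;     /* one white pixel */

    getch();
    r.x.ax = 0x0003;                /* back to text mode */
    int86(0x10, &r, &r);
    return 0;
}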
zb10948 wrote on 2025-03-28, 13:22:
Basically, modern languages and DOS do not go hand in hand. Why should they? DOS is a single-user runtime, not an operating system in today's sense.
An asm "Hello World" as a .COM file is 20 bytes; the same program is 5 KB in Rust. There is stuff for Go and other currently popular languages, but I'd avoid it.
It's a curiosity to build DOS executables using modern languages - fun, but useless. It only brings you bloat. You don't want a strongly typed language in DOS.
As for classic languages, you probably can't miss with the latest DJGPP, which comes with the 2020 GNU Compiler Collection (9.3).
You still have to keep your target platform in mind - basically any PC from the 386 up - and adapt your programming style to it.
The charm of DOS programming isn't the DOS stuff itself but the fact that you were resource-constrained.
One of the advantages of modern toolsets is that they are able to use huge modern libraries and ..... oh! 😀
I'm fine using an old dev tool for an old environment; by the mid-90s, development tools for DOS were pretty much done, and the final versions of the classic dev tools supporting DOS (like BP7, BC 3.1, MSC 7, MASM 6, and so on) are great for almost anything realistic in DOS anyway, with some later support from DOS-targeting tools like DJGPP.
Part of the fun of DOS programming is that the developer is not layering over APIs, libraries, and objects at such arm's length from the hardware as is often the case with modern tools.
I disagree; using modern tooling to develop for older systems is far, far more productive.
A modern cross-compiler toolchain for Linux, macOS, or Windows, coupled with a syntax/code highlighter, command completion with linting tools, and build/link/test automation, beats trying to run native DJGPP on DOS hands down!
There are plenty of modern tools specifically written to target retro systems, without all of the modern library bloat.
I like gcc/DJGPP, but that's because I'm a Unix guy at heart, and it makes it incredibly easy to lift and shift from a POSIX-type target to protected-mode DOS; but you also have modern versions of Open Watcom, newer assemblers, etc.
I disagree; using modern tooling to develop for older systems is far, far more productive.
A modern cross-compiler toolchain for Linux, macOS, or Windows, coupled with a syntax/code highlighter, command completion with linting tools, and build/link/test automation, beats trying to run native DJGPP on DOS hands down!
I like using all sorts of old software and hardware - which is why I'm on this forum and, for example, keep participating in threads about Turbo C++ - but yes, since my full-time job is writing software using modern development environments, I miss the last 30 years of innovation in that area when I use some historical tools!
I disagree; using modern tooling to develop for older systems is far, far more productive.
I didn't say modern tooling but modern languages and modern versions of classic languages.
The most productive way is to set up cross-compilation, or to compile inside an emulator/DOSBox while using your modern PC for the IDE.
I'm a Unix guy too.
In my opinion, if you want to realize a DOS project in the optimum way, you won't actually use a DOS machine as your development workstation.
But that kind of kills the magic - sometimes you take on a project precisely so you can spend time with an old machine.
I like using all sorts of old software and hardware - which is why I'm on this forum and, for example, keep participating in threads about Turbo C++ - but yes, since my full-time job is writing software using modern development environments, I miss the last 30 years of innovation in that area when I use some historical tools!
It's all fine and dandy, but without the API layer of an OS you lose the ability to insulate yourself from the side effects your code produces.
That insulation is what 99.9999% of modern programmers rely on and what their development environments deliver. You expect a debugger to step through program execution without one program instruction crashing the entire computer.
If one knows what kernel development is all about - the environment setup, and the methods used during development to hunt down and debug issues - or has worked with non-trivially debuggable things like microcontrollers, then one is ready for DOS.
Basically, modern languages and DOS do not go hand in hand. Why should they? DOS is a single-user runtime, not an operating system in today's sense.
An asm "Hello World" as a .COM file is 20 bytes; the same program is 5 KB in Rust. There is stuff for Go and other currently popular languages, but I'd avoid it.
There are high-level languages well suited to DOS:
I write almost all of my DOS software with my own Micro-C (8086/PC edition).
It's nowhere near 5K - but it's still much bigger than could be done in assembler, mainly because of printf(), which is a very powerful C library function that can produce formatted output with all kinds of useful format options. (A program to print a signed decimal number in a fixed field width, with or without leading zeros, would be no bigger.)
There's also setup, argument parsing, buffered I/O functions, and other things essential for any reasonably functional DOS program.
If you don't use printf(), you can cut that to about half the size, but you still have the setup, argument parsing, buffered I/O, etc.
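(A hedged sketch of the idea in generic 16-bit C - not Dave's actual Micro-C code: one direct DOS call, so no formatting machinery gets linked in.)

#include <dos.h>

static char msg[] = "Hello world\r\n$";

int main(void)
{
    union REGS r;
    r.h.ah = 0x09;              /* DOS int 21h: print '$'-terminated string */
    r.x.dx = (unsigned)msg;     /* DS-relative offset (tiny/small data model) */
    int86(0x21, &r, &r);
    return 0;
}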
Yes, I could write a .ASM program to send the string "Hello world" directly to the PC console in only a few bytes... but to do anything "real" or at all efficient, I'm going to have to generate more code.
To put that in perspective, the Micro-C compiler itself (a fairly complete implementation of a C language compiler for DOS):
DaveDDS, for the love of god, I didn't mean C there.
Do you really think C is considered a "modern language" in 2025?
I still use it daily... (but I've always been kind of anal about knowing exactly what the code I'm developing is doing).
There are lots of things that make no sense under DOS (or any simple executive), but in any discussion I take some direction from the topic subject.
And I was responding to a statement about how much better assembler is than a high-level language (I know of very little development in ASM on "modern" OSs).
C is not going anywhere, it is not being replaced, it is widely used, it has been widely used. Period.
When I made the comparison against assembler, I compared Rust. Rust is a modern language with a fat runtime; it cannot produce even a .COM file below a few kilobytes in size. That is the point. Nowhere did I mention C/C++ in this context.
It amazes me sometimes how much the computing industry has changed. I got into it in the 70s and made a career out of producing tools to help in the development of embedded systems.
A "big" target was an 8051 with a whopping 4K of ROM and 128 bytes of RAM (later made MASSIVE in the 8052: 8K of ROM / 256 bytes of RAM).
I've worked on processors as tiny as 128-256 bytes of ROM and 0-16 bytes of RAM (some so small I never bothered porting my compiler to them).
I did most of my early work in .ASM and later specifically designed Micro-C to target very small systems.
It still astounds me how many "good" modern programmers don't even know what an assembler is - let alone know about instructions, opcodes, or registers.
---
I still write a LOT of stuff under DOS (which easily ports to Winblows and Linux via DVM - DavesVirtualMachine). I almost always use TINY (64K code+data) or, in a few cases, SMALL (64K code + 64K data - the prime example being Micro-C, as it maintains a fair number of sizable internal tables). I could count on one hand (with fingers left over) the number of times I've had to go bigger.
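(For anyone who hasn't met 16-bit memory models, a quick illustrative sketch - Turbo C syntax, since Micro-C's details differ: in TINY and SMALL, plain pointers are 16-bit offsets into one data segment, while far pointers carry segment:offset and can reach anywhere in the first megabyte, e.g. video memory.)

#include <dos.h>
#include <stdio.h>

int main(void)
{
    char near *np = 0;                              /* 16-bit offset in DS */
    char far  *fp = (char far *)MK_FP(0xB800, 0);   /* color text screen */

    printf("near: %u bytes, far: %u bytes\n",
           (unsigned)sizeof np, (unsigned)sizeof fp);
    *fp = '!';      /* top-left character cell on an 80x25 color screen */
    return 0;
}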
And this is NOT all simple stuff; I've written LOTS over the years. Here are a few of the better-known things I've done:
ImageDisk (TINY)
Micro-C (SMALL)
DDLINK (TINY)
ARMOS - multitasking ARM OS - no memory models, but not big
Back in the day, I think Bill Gates was actually fairly accurate when he said "640K ought to be enough".
---
Back to the .ASM vs. high-level discussion - so Rust can display a simple line of text in 5K. Impressive.
Here's a slightly bigger .ASM program I wrote (I think in the early 80s): an 8086 machine-language monitor, which includes things like a full 8086 disassembler, single-stepping, breakpoints, various memory/register/IO edit/display functions, the ability to download code as Intel or Motorola format HEX records... and more:
MON86 Commands:

B <num> <address>          - Set breakpoint
CR <reg> <value>           - Change register
DB                         - Display breakpoints
DI <start>,<end>           - Disassemble memory
DM <start>,<end>           - Dump memory
DR                         - Dump registers
E <address>                - Edit memory
FM <start>,<end> <value>   - Fill memory
G [address]                - Go (Begin execution)
I <port>                   - Input from I/O port
L                          - Load from host
MM <start>,<end> <dest>    - Move memory
O <port> <data>            - Output to I/O port
Q                          - Terminate MON86
S                          - Step one instruction
+/- <value> <value>        - Hex arithmetic
This was aimed at embedded 8086 systems, but "just for fun" I built it as a .COM which runs under DOS:
88-04-26 10:43a 4,421 MON86.COM
And just to be clear, MON86.COM is NOT compressed in any way (and includes the above command help text within it).
But in today's world of at least 4 (more likely 8, 16, or 32+) gigs of system memory, terabyte hard drives, and GHz processors - why should anyone care about efficiency?
(Systems today are orders of magnitude faster than 10-20 years ago, but you almost never really see it...)
When I made the comparison against assembler, I compared Rust. Rust is a modern language with a fat runtime; it cannot produce even a .COM file below a few kilobytes in size. That is the point. Nowhere did I mention C/C++ in this context.
I suppose the general version is HLL vs. hardware-specific. Rust is impressive, but so are C#, Java, Swift, Go, and so on. There are more languages and toolsets than ever, and they look so much alike that in the end it's the environment and the associated tools and libraries that matter more when choosing.
When we pick assembly language to show how small an executable can be, it's often that small because there is hardly anything there: no handling of unusual entry/exit, concurrency, or other housekeeping events, and no invocation of library functions designed to robustly handle many use cases when only one or two are actually exercised by the code. For highly specialized cases it's great to show how far we can sculpt down the functionality, but in general the extra KB/MB a HLL compiler puts in the executable isn't just undefined bloat as such; it's often functional (just not actually used - and I agree, sometimes excessive).
It still astounds me how many "good" modern programmers don't even know what an assembler is - let alone know about instructions, opcodes, or registers.
I suppose it's specialisation. It isn't needed to build a storefront app for online shopping; in fact very little is - it's often just joining modules together. In data development, too, it's all about abstract tool use. Even games are all Unity and scripts. You have to dig through various layers of tools, libraries, and APIs before you get to something that uses concepts familiar from assembly.