VOGONS


First post, by John1985

User metadata
Rank Newbie

When you develop a program, it uses some amount of CPU and RAM and accesses the hard disk frequently.
These are normal phenomena in programming, so nobody usually worries about them.
But in DOS or embedded programming this is very important, because it can determine the overall performance of the program.
So what is the bottleneck in DOS programming?
When I do some calculations, CPU usage increases.
When I create a lot of class objects, RAM usage increases.
When I read from and write to disk, the number of I/O accesses increases.
Of course, these are all very important, but I want to know which one is the most vital and critical in the DOS field.

Reply 1 of 25, by GigAHerZ

User metadata
Rank Oldbie

Bottleneck depends on what your program does. Every program has its bottlenecks in different places...

"640K ought to be enough for anybody." - And i intend to get every last bit out of it even after loading every damn driver!

Reply 2 of 25, by Oetker

User metadata
Rank Oldbie

Bottleneck depends on the program/hardware combination, not on the OS.
However:
- Unless you're using a tool suite that integrates a DOS extender, accessing more than 1 MB of memory is difficult.
- Hard disk access will generally be slow, as there was usually no DMA support. Though disks were slow to begin with.
- Writing to video memory is slow on ISA-based systems.

Reply 3 of 25, by Gmlb256

User metadata
Rank Oldbie

Programming on DOS involves communicating directly with the hardware, and the closest thing to an API is the software interrupts. Optimizing for performance at the time meant programming in assembly language, as compilers weren't good at optimization.

Oetker wrote on 2021-09-03, 13:28:

- Hard disk access will generally be slow, as there was usually no DMA support. Though disks were slow to begin with.

There are some drivers that enable DMA support on DOS, but that depends on the hard disk/CD-ROM controller card and/or the chipset on the motherboard.

Reply 4 of 25, by keenmaster486

User metadata
Rank l33t

Yeah, it just depends on what you're doing and the specs of your system, like everything else.

However, compared to a modern system, your hard disk access will be slower relative to the rest (I think? due to SSDs being common nowadays), so if you're doing something disk-intensive, that will bottleneck first.

I flermmed the plootash just like you asked.
World's foremost 486 enjoyer.

Reply 5 of 25, by Jo22

User metadata
Rank l33t++

Yes, but on the other hand, DOS doesn't use swap files, so there will be less HDD usage than on Windows/Linux.
Thus, HDD usage will be more "linear" on DOS: the read/write heads don't have to jump from track to track (as is normally the case in a multi-tasking OS).
Provided, that is, that the HDD is defragmented.

"Time, it seems, doesn't flow. For some it's fast, for some it's slow.
In what to one race is no time at all, another race can rise and fall..." - The Minstrel

//My video channel//

Reply 6 of 25, by BitWrangler

User metadata
Rank l33t

If targeting 386/486, you can probably only rely on about 500 kilobytes per second of disk transfer; DX4-class machines tended to have 1 MB/s, then into Pentiums it was 2 MB/s and up. If you are hammering the disk but the CPU is meanwhile figuratively twiddling its thumbs waiting, consider compression.

2017: Basement full of ancient PC stuff, starting to go through it. 2021: Still starting, heh, many setbacks. So what's this BitWrangler guy's deal ??? >>> Taming the pile, specs to target?

Reply 7 of 25, by retardware

User metadata
Rank Oldbie

RAM, RAM and RAM.
Many compilers had to make heavy use of temp files, which bogged down building.

To get your software to run in 640K, you sometimes even had to do overlay linking, a thing probably few people know of anymore.
(FYI: you can structure your software into several overlays that get loaded on demand. You had to do this carefully, to avoid too much swapping between different overlaid modules.)

And don't even think of compression to speed up disk transfers. CPUs were way too slow for that. The 80 kB/s transfer rate of PC/XT MFM disks was way faster than these CPUs could compress.

Reply 8 of 25, by Jo22

User metadata
Rank l33t++
retardware wrote on 2021-09-03, 16:41:

RAM, RAM and RAM.
Many compilers had to make heavy use of temp files, which bogged down building.

To get your software to run in 640K, you sometimes even had to do overlay linking, a thing probably few people know of anymore.
(FYI: you can structure your software into several overlays that get loaded on demand. You had to do this carefully, to avoid too much swapping between different overlaid modules.)

Right! I remember that RAM disk drivers were very popular among developers because of this!
The overlay technique was common between roughly 1986 and 1990, I think.
Applications using them could be recognized by the presence of *.OVL files! 😁

retardware wrote on 2021-09-03, 16:41:

And don't even think of compression to speed up disk transfers. CPUs were way too slow for that. The 80 kB/s transfer rate of PC/XT MFM disks was way faster than these CPUs could compress.

DoubleSpace works fine on my XT clone @ 4.77 MHz, at least.
There was no noticeable slowdown, as far as I can tell.
Even before the upgrade to the NEC V20.

Here's a video of the installation part.
Unfortunately, the battery of the camcorder ran out during filming. 🙁

https://www.youtube.com/watch?v=5XxOtHodSBU

That being said, I can't speak for other, earlier disk compressors from the 1980s.
DoubleDisk was THE disk compression utility at the time.
A competitor, Stacker, even sold an extra ISA card for accelerated compression/decompression.

Attachments

  • stac-card.jpg
    File size: 534.6 KiB
    File comment: https://virtuallyfun.com/wordpress/2011/07/12/stac-electronics-stacker-for-os2fd/
    File license: Fair use/fair dealing exception

"Time, it seems, doesn't flow. For some it's fast, for some it's slow.
In what to one race is no time at all, another race can rise and fall..." - The Minstrel

//My video channel//

Reply 9 of 25, by retardware

User metadata
Rank Oldbie

That card was crazy...

But, disk compression is a good thing, indeed 😀
Nowadays, all my (modern) computers use the ZFS filesystem with LZ4 compression:

# zfs get compression zpool
NAME   PROPERTY     VALUE  SOURCE
zpool  compression  lz4    local
# zfs get compressratio zpool
NAME   PROPERTY       VALUE  SOURCE
zpool  compressratio  1.14x  -
#

So, the Stacker idea still lives 😀

But, back to the DOS topic...
DoubleSpace/DriveSpace was a TSR of, IIRC, somewhere between 60 and 70 kB... OUCH... so I preferred compression with PKZIP and LHarc 😀

Reply 10 of 25, by Caluser2000

User metadata
Rank l33t

I would imagine the biggest bottleneck would be the person writing the code.....😉

There's a glitch in the matrix.
A founding member of the 286 appreciation society.
Apparently 32-bit is dead and nobody likes P4s.
Of course, as always, I'm open to correction...😉

Reply 11 of 25, by Jo22

User metadata
Rank l33t++

Quick update: I ran an HDD benchmark on my PC/XT compatible @ 4.77 MHz (V20) with DoubleSpace loaded.
The HDD is an antique 8" MFM/RLL fixed disk with 20 MB capacity.

CheckIt v4 reports: 130 KB/s
System Information (SI) reports: 140 KB/s

Caluser2000 wrote on 2021-09-03, 19:29:

I would imagine the biggest bottleneck would be the person writing the code.....😉

Is that a pun.. Or a joke? I don't get that joke. 🙁

Edit: Found this. https://lifehacker.com/how-to-avoid-being-a-h … work-1621627003

But maybe the joke was about programmers with drinking problems, which were common?
Phil Katz, for example, the programmer of PKZip, drank a bit too, and was found lifeless in 2000. 😢

http://www.bbsdocumentary.com/library/CONTROV … S/SEA/pkzip.htm

Edit: Found one more reference to alcohol/bottles/programmers.
https://www.explainxkcd.com/wiki/index.php/323:_Ballmer_Peak

"Time, it seems, doesn't flow. For some it's fast, for some it's slow.
In what to one race is no time at all, another race can rise and fall..." - The Minstrel

//My video channel//

Reply 12 of 25, by Stiletto

User metadata
Rank l33t++
Jo22 wrote on 2021-09-05, 03:57:
Caluser2000 wrote on 2021-09-03, 19:29:

I would imagine the biggest bottleneck would be the person writing the code.....😉

Is that a pun.. Or a joke? I don't get that joke. 🙁

Edit: Found this. https://lifehacker.com/how-to-avoid-being-a-h … work-1621627003

I am fairly certain this is more or less the definition that Caluser2000 was going for.

Or more of a joke, based on the thread title:

The main bottleneck in the creation of a DOS program would be whether or not you actually have a programmer to work on the DOS program.

The second bottleneck is then if the DOS programmer that you have found is any good at programming or not...

"I see a little silhouette-o of a man, Scaramouche, Scaramouche, will you
do the Fandango!" - Queen

Stiletto

Reply 13 of 25, by Caluser2000

User metadata
Rank l33t
Stiletto wrote on 2021-09-05, 19:52:
Jo22 wrote on 2021-09-05, 03:57:
Caluser2000 wrote on 2021-09-03, 19:29:

I would imagine the biggest bottleneck would be the person writing the code.....😉

Is that a pun.. Or a joke? I don't get that joke. 🙁

Edit: Found this. https://lifehacker.com/how-to-avoid-being-a-h … work-1621627003

I am fairly certain this is more or less the definition that Caluser2000 was going for.

Or more of a joke, based on the thread title:

The main bottleneck in the creation of a DOS program would be whether or not you actually have a programmer to work on the DOS program.

The second bottleneck is then if the DOS programmer that you have found is any good at programming or not...

You got it....😉

And more........

There's a glitch in the matrix.
A founding member of the 286 appreciation society.
Apparently 32-bit is dead and nobody likes P4s.
Of course, as always, I'm open to correction...😉

Reply 14 of 25, by Jo22

User metadata
Rank l33t++
Caluser2000 wrote on 2021-09-05, 19:55:
Stiletto wrote on 2021-09-05, 19:52:
Jo22 wrote on 2021-09-05, 03:57:

Is that a pun.. Or a joke? I don't get that joke. 🙁

Edit: Found this. https://lifehacker.com/how-to-avoid-being-a-h … work-1621627003

I am fairly certain this is more or less the definition that Caluser2000 was going for.

Or more of a joke, based on the thread title:

The main bottleneck in the creation of a DOS program would be whether or not you actually have a programmer to work on the DOS program.

The second bottleneck is then if the DOS programmer that you have found is any good at programming or not...

You got it....😉

And more........

Ah, I see. Thanks, you two. 🙂

"Time, it seems, doesn't flow. For some it's fast, for some it's slow.
In what to one race is no time at all, another race can rise and fall..." - The Minstrel

//My video channel//

Reply 15 of 25, by John1985

User metadata
Rank Newbie

Thanks for your answers! I really appreciate them all.
But I am still not sure what really applies to my case....
Let me explain my situation.
As DOS is really slow and uncomfortable to work in, I am coding and building for both Ubuntu and DOS using DJGPP.
My Ubuntu VM has 2 CPUs and 2 GB of RAM, but it always uses only 10.2% of RAM while running.
According to this percentage, the program must use about 240 MB in DOS, am I right?
Though I increased the memory size to 300 MB in DOSBox, it crashes all the time, and so does real DOS.
Could you kindly explain this?
Actually, it is a simple 2D graphics game, being developed with Allegro.
What I am wondering about is that many DOS games are over 100 MB in size and support 3D rendering, yet they work well in a low-spec DOSBox!

Reply 16 of 25, by gerry

User metadata
Rank Oldbie

Might it just be that your program contains bugs?

If you are using DJGPP and Allegro, one test you can try is to download another Allegro/DJGPP project that is known to work well and compile that on the VM.

That may help you determine whether or not there is an environmental problem, and go from there.

Reply 17 of 25, by Gmlb256

User metadata
Rank Oldbie
John1985 wrote on 2021-09-06, 23:11:

Thanks for your answers! I really appreciate them all.
But I am still not sure what really applies to my case....
Let me explain my situation.
As DOS is really slow and uncomfortable to work in, I am coding and building for both Ubuntu and DOS using DJGPP.
My Ubuntu VM has 2 CPUs and 2 GB of RAM, but it always uses only 10.2% of RAM while running.
According to this percentage, the program must use about 240 MB in DOS, am I right?
Though I increased the memory size to 300 MB in DOSBox, it crashes all the time, and so does real DOS.
Could you kindly explain this?
Actually, it is a simple 2D graphics game, being developed with Allegro.
What I am wondering about is that many DOS games are over 100 MB in size and support 3D rendering, yet they work well in a low-spec DOSBox!

I prefer using the OpenWatcom compiler for DOS instead of DJGPP, since the latter is bloated (especially in later versions) and doesn't compile 16-bit real-mode executables. A DOS executable that uses that much memory points to inefficiency; most games of the DOS era consumed less than 16 MB of RAM (including the ones that use 3D rendering).

I also don't recommend testing actual performance on DOSBox, since it doesn't emulate a real CPU and hides problems that could appear on real machines. It's only useful for initial testing if you are developing from within DOSBox.

Reply 18 of 25, by ViTi95

User metadata
Rank Member

That's too much RAM usage for a DOS game; consider that games like DOOM run in just 4 MB. Reduce data types to smaller ones whenever possible, and be very careful with dynamic memory allocation (free data whenever possible; better yet, use static resources). Very importantly, memory leaks are a serious problem in DOS: the kernel doesn't police memory accesses at all, which can lead to HDD corruption and the like. It also happens that 386 and older processors suffer from very slow RAM transfers, so the less you use memory, the better.

There is also another big problem in DOS programming: the main bottleneck on 386/486 machines is the slow transfer speed of the ISA bus. For example, most ISA VGA video cards (even the 16-bit ones) can't update a whole 320x200 (256-color) screen at more than 30 frames per second. EGA cards are even worse: all of them are 8-bit and can't update at reasonable speeds. DOSBox doesn't emulate this bottleneck, so it's better to test the executables with PCem.

DOS programming requires you to be very clever if you want things to go fast on older machines. DOS itself isn't the problem; the problem is the hardware you're dealing with.

Reply 19 of 25, by Jo22

User metadata
Rank l33t++
ViTi95 wrote on 2021-09-16, 08:28:

It also happens that 386 and older processors suffer from very slow RAM transfers, so the less you use memory, the better.

If EMM386 is used, for sure. It utilizes both the Memory Management Unit (MMU) and Virtual 8086 Mode (VM86/V86), a sub-mode of 80386 Protected Virtual Address Mode.

Unfortunately, using V86 in this way causes a slowdown on an 80386 processor.
The issue was fixed in late 80486 processors with the advent of VME (the feature the AMD Ryzen messed up years ago).
Edit: QEMM 7+ and 386MAX(?) supported VME, which was considered a valued feature of the new 586 (Pentium) CPU.

Edit: In pure Real Address Mode, or in Protected Virtual Address Mode without V86, the performance is better.

So, as a workaround, using hardware solutions for UMBs and EMS makes sense on 486 systems and earlier, at least on plain DOS.
If no dedicated hardware is available (UMB/EMS boards on the bus), chipsets can provide these.
They will likely also cause little to no bottleneck, by comparison.

Edit: Utilities like UMBPCI provide a sort of chipset UMBs, through the use of shadow memory intended for the PCI bus.
Unfortunately, support on early 486-era VIP motherboards is flaky.
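As a concrete sketch, a CONFIG.SYS using UMBPCI as the UMB provider instead of EMM386 might look like the following; the driver paths are assumptions, and note that UMBPCI provides UMBs only, not EMS:

```
DEVICE=C:\DOS\HIMEM.SYS
DEVICE=C:\UMBPCI\UMBPCI.SYS
DOS=HIGH,UMB
```

With this setup no V86 mode is entered at all, so the CPU stays in the faster real-mode path described above.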

"Time, it seems, doesn't flow. For some it's fast, for some it's slow.
In what to one race is no time at all, another race can rise and fall..." - The Minstrel

//My video channel//