VOGONS


Programming on the Pocket 8088


First post, by RetroPCCupboard

Rank: Oldbie

Hi,
I own the Pocket 8088. It came with a NEC V30, but I swapped it for an AMD 8088, as I wanted a close-to-IBM-XT experience on it.

Anyway, I thought I would try some programming on it to see what the software dev experience/performance is like on an 8088 running at 4.77 MHz.

I wrote a simple hello world program in each language. All it did was write "Hello World" to the screen and then quit.

Time to compile for each language:

Borland Turbo Basic: 2 seconds
Borland Turbo C 2.0: 8 seconds
Borland Turbo C++ 1.0: 1 minute 27 seconds
Borland Turbo Assembler 2.0: 5 seconds

Resulting EXE file sizes:

Turbo Basic: 34,736 bytes
Turbo C: 9,499 bytes
Turbo C++: 32,841 bytes
Turbo Assembler: 543 bytes

So, as you see, there's quite a trade-off in file size for using a higher-level language.

At the moment I am finding the keyboard to be fine for programming on. C++ is clearly far too slow to consider using on this device. The others seem reasonable.

I haven't compared performance yet. What algorithm or computation would you use to compare performance?

Reply 1 of 26, by Yoghoo

Rank: Member

I'm surprised that Turbo C creates such a "big" file. For reference, Turbo Pascal 7.0 creates a 2,192-byte file for the same "Hello World" program. I don't have a 4.77 MHz PC so I can't give a number for the compile part, btw. But from experience it is very fast compared to other compilers.

What do you mean by compare algorithm or computation? The runtime of the executable or the compile itself?

Reply 2 of 26, by RetroPCCupboard

Rank: Oldbie
Yoghoo wrote on 2025-08-04, 10:40:

What do you mean by compare algorithm or computation? The runtime of the executable or the compile itself?

My assumption is that most of the compile time is caused by the compiler adding in standard language functionality (even if I am not using it). I only included iostream.h for the C++ version and stdio.h for C. But it would be interesting to see how long a more complex program takes.

I am also interested in execution performance. But I expect that, if the compiler is worth anything, it will do a better job than me handcrafting assembly. I am only just learning assembly now.

Reply 3 of 26, by Jo22

Rank: l33t++

I own the Pocket 8088. It came with a NEC V30 but I swapped it for an AMD 8088, as I wanted a close to IBM XT experience on it.

Sure. But dude, the V20/V30 was a period-correct upgrade from the mid-80s.
Most sane XT users not only used the V20 upgrade, but accelerator cards as soon as possible.
The i8088 at 4.77 MHz was horrible, below the performance of a C64's processor.
No one in his/her right mind kept the IBM PC at factory configuration, if an upgrade was somehow feasible and if the software could handle it.
Makers of MS-DOS compatibles not seldom used a full 8086, which was at least on par with a C64's 6510. 😉

Edit: No offense, I understand the purpose. It's legit, too.
But the whole amount of IBM PC™ period-correctness nowadays is a bit disturbing, I think.
It's bad enough that emulator writers aim for cycle-exact timings of the i8088.
That chip wasn't meant to exist in the first place. The i8086 was the real thing, rather. And it runs at usable speeds.

Edit: Please never mind, this was merely meant as a historical anecdote. 🙂

"Time, it seems, doesn't flow. For some it's fast, for some it's slow.
In what to one race is no time at all, another race can rise and fall..." - The Minstrel

//My video channel//

Reply 4 of 26, by RetroPCCupboard

Rank: Oldbie
Jo22 wrote on 2025-08-04, 11:25:

Sure. But dude, the V20/V30 was a period-correct upgrade from the mid-80s.
Most sane XT users not only used the V20 upgrade, but accelerator cards as soon as possible.
The i8088 at 4.77 MHz was horrible, below the performance of a C64's processor.
No one in his/her right mind kept the IBM PC at factory configuration, if an upgrade was somehow feasible and if the software could handle it.
Makers of MS-DOS compatibles not seldom used a full 8086, which was at least on par with a C64's 6510. 😉

Edit: No offense, I understand the purpose. It's legit, too.
But the whole amount of IBM PC™ period-correctness nowadays is a bit disturbing, I think.
It's bad enough that emulator writers aim for cycle-exact timings of the i8088.
That chip wasn't meant to exist in the first place. The i8086 was the real thing, rather. And it runs at usable speeds.

Yes, but I have faster PCs if I want them. The V30 was too fast for some XT-class games that I tried.

My purpose for owning this device is that I want to fully understand it. Poke about in memory, look at DOS inner workings. Program it in assembly.

I want to see what I can make this hardware do in its slowest form, but make it scalable up to the default V30-at-10 MHz form. I would like to make a game of some sort, but I haven't decided what yet. I think programming in ASM will be too much work, so I was thinking I'd mostly use C and just do the graphics routines in ASM. But it really depends on my findings with regard to C performance on this device. Pure ASM may be required.

Reply 5 of 26, by DaveDDS

Rank: Oldbie

I don't have a "Pocket 8088", so I played around in DOSBox, and found I had to set CYCLES down
to 230 (very low) in order to get a compile time of 8 seconds with Turbo-C. This is an original
TC 2.0, which must be different from the version you are using, as the executable size is different.

// HELLO1 - Traditional "Hello world"
#include <stdio.h>
main()
{
    printf("Hello world");
}

// HELLO2 - Eliminate "printf" (smaller)
#include <stdio.h>
main()
{
    fputs("Hello world\n", stdout);
}

Just for fun I also tested my own Micro-C compiler

TC 2.0  HELLO1 : Elapsed time: 00:00:08.02
TC 2.0  HELLO2 : Elapsed time: 00:00:08.02
MC 3.03 HELLO1 : Elapsed time: 00:00:07.63
MC 3.03 HELLO2 : Elapsed time: 00:00:07.64

As you can see, they are pretty close. To be fair: Micro-C is a pure compiler only
- my "CC" command (which is what I timed) calls:
MCP (preprocessor), MCC (compiler), MCO (optimizer), TASM and TLINK

Code size is a tad different:

25-08-04  8:07a         61  R:\HELLO1.C
25-08-04  8:08a      1,334  R:\HELLO1.COM  <- Micro-C
25-08-04  8:14a      6,504  R:\HELLO1.EXE  <- Turbo-C
25-08-04  8:05a         70  R:\HELLO2.C
25-08-04  8:08a        551  R:\HELLO2.COM  <- Micro-C
25-08-04  8:08a      4,792  R:\HELLO2.EXE  <- Turbo-C

I do have an ancient "Poqet PC", which is an 8088... Hard to get stuff
on/off, but if you want, I can try to run the tests on it (timing would
probably be by stopwatch).

Dave ::: https://dunfield.themindfactory.com ::: "Daves Old Computers"->Personal

Reply 6 of 26, by Jo22

Rank: l33t++

@RetroPCCupboard I fully understand, but I think we should ask ourselves this question here:
Which (professional) programmer/developer in the 1980s enjoyed working on a slow development system on purpose? Or could afford working in slow motion?

I mean, if we had been developers of the time, would we have had the free time to wait for hours until the PC had completed compilation of a software project?
How can we get any serious work done if every re-compile takes minutes or tens of minutes to finish?
I'm not even thinking about money, but workflow and mental health.

I would think (assume) that developers had two setups, one for development and one standard setup to simulate end-user configuration.
Or a single development PC with an accelerator board that can be disabled at will (via software or toggle-switch).

Edit: Example: https://forum.vcfed.org/index.php?threads/may … or-board.56371/

"Time, it seems, doesn't flow. For some it's fast, for some it's slow.
In what to one race is no time at all, another race can rise and fall..." - The Minstrel

//My video channel//

Reply 7 of 26, by DaveDDS

Rank: Oldbie
Jo22 wrote on 2025-08-04, 12:54:

@RetroPCCupboard I fully understand, but I think we should ask ourselves this question here:
Which (professional) programmer/developer in the 1980s enjoyed working on a slow development system on purpose? Or could afford working in slow motion? ...

I'm assuming the "Pocket 8088" is pretty slow - I used TC 2.0 a lot back in the late '80s and don't recall it being anywhere near
that slow - "Hello world" is pretty simple... I used TC when I was initially developing Micro-C (which is a LOT more complex than
"Hello world") - I don't recall waiting "hours" for builds.

Also, to be fair - I think much of this was on a 286-class machine, which would have been quite a bit faster
than an XT.

I don't recall if this was switchable in TC, but later, when MC became my primary toolset and I was working on something
that was "really big", I would often skip the optimization step during test builds, as that could save a fair bit of time.

Dave ::: https://dunfield.themindfactory.com ::: "Daves Old Computers"->Personal

Reply 8 of 26, by Deunan

Rank: l33t

I've read that the 8088 was IBM's choice because there were already some I/O devices for an 8-bit data bus that could be easily adapted to the XT. Going full 16-bit with the 8086 requires the mobo to split transactions, which was not that easy with early mobo implementations that had mostly 74-series logic on them and maybe some PALs.

As for the OP's questions, ASM is usually considerably faster, even compared to C. Especially on such slow and limited machines, where every cycle counts and every register spill is a dozen cycles wasted. But the bigger the program, the more work it is to keep it optimized in assembly. At some point you'll have had enough of tracking which registers get trashed where, and either start saving and restoring everything with PUSH/POP or come up with your own conventions and decide to write everything according to some rules - which registers can get trashed, what needs save/restore, how to return values, etc. It is at this moment that you realize that C does all that for you with way less work.

Now, these early compilers are not great at optimizations, but then again not much can be done on x86 until the 386. The C compiler is supposed to produce code that is "competent" - not nearly as good as pure assembly, but also not horribly slow, by tracking register usage and doing as much as possible in registers. You want to balance things: write C code (unless you really want all the performance and are prepared to spend time, a lot of time, writing code), but perhaps use some inline assembly or some external functions written in ASM for specific tasks. Like, say, video memory manipulation, if you also need plane or bank changes, etc.

My advice is: go ahead and learn x86 assembly. Go as far with it as you can; you will hit that point I mentioned, and it'll simply force you to reconsider C. At that point you'll know both, and also, perhaps, how to effectively use assembly with C. Most compilers have an option to output the assembly of the compiled code in text form; use that to study just how badly optimized a single C procedure can be, but also how it makes more sense to use C anyway as the program grows.

Reply 9 of 26, by Jo22

Rank: l33t++

Regarding Turbo C and the large EXE..
My father used to use Power C by MIX Software.
It creates rather compact code, especially if the C program is written in K&R C and without <stdio.h> in the header.

Edit: The 8088 mainboard was nothing to write home about, in my eyes.
It was little different (if at all) from a minimalist CP/M single-board computer from ~1976.
Students could have made a prototype of that IBM board over the summer vacation, I think. Or radio amateurs, for that matter.
The only notable differences from a CP/M-80 system were, I think: a) the 8080 substituted by an 8088, b) the addition of firmware in ROM (BIOS/BASIC) and c) more RAM on-board.
The use of 1977-era Apple II style expansion slots was wise, too.

"Time, it seems, doesn't flow. For some it's fast, for some it's slow.
In what to one race is no time at all, another race can rise and fall..." - The Minstrel

//My video channel//

Reply 10 of 26, by RetroPCCupboard

Rank: Oldbie
Jo22 wrote on 2025-08-04, 12:54:

@RetroPCCupboard I fully understand, but I think we should ask ourselves this question here:
Which (professional) programmer/developer in the 1980s enjoyed working on a slow development system on purpose? Or could afford working in slow motion?

I mean, if we had been developers of the time, would we have had the free time to wait for hours until the PC had completed compilation of a software project?
How can we get any serious work done if every re-compile takes minutes or tens of minutes to finish?
I'm not even thinking about money, but workflow and mental health.

I would think (assume) that developers had two setups, one for development and one standard setup to simulate end-user configuration.
Or a single development PC with an accelerator board that can be disabled at will (via software or toggle-switch).

Edit: Example: https://forum.vcfed.org/index.php?threads/may … or-board.56371/

Yes. You are quite correct. I am not intending to develop anything complex on this device. On-device programming was more intended for poking around and learning the basics.

But when the XT first came out, there was no faster IBM-compatible PC. Particularly if you were just programming a game as a hobbyist, you would use whatever PC you had.

If I try to do anything more complex, I will probably use DOSBox or one of my faster retro PCs, and periodically test the compiled executable on the Pocket 8088.

Reply 11 of 26, by RetroPCCupboard

Rank: Oldbie
DaveDDS wrote on 2025-08-04, 13:05:
Jo22 wrote on 2025-08-04, 12:54:

@RetroPCCupboard I fully understand, but I think we should ask ourselves this question here:
Which (professional) programmer/developer in the 1980s enjoyed working on a slow development system on purpose? Or could afford working in slow motion? ...

I'm assuming the "Pocket 8088" is pretty slow - I used TC 2.0 a lot back in the late '80s and don't recall it being anywhere near
that slow - "Hello world" is pretty simple... I used TC when I was initially developing Micro-C (which is a LOT more complex than
"Hello world") - I don't recall waiting "hours" for builds.

Also, to be fair - I think much of this was on a 286-class machine, which would have been quite a bit faster
than an XT.

I don't recall if this was switchable in TC, but later, when MC became my primary toolset and I was working on something
that was "really big", I would often skip the optimization step during test builds, as that could save a fair bit of time.

Yes, it is definitely slow. When you run the DIR command you can see it drawing the directory listing.

The TC that's on this device came preinstalled, so I am not sure of its source.

I can, at the press of a button, double the speed of this machine to 10 MHz. My testing at 4.77 MHz was more of a worst-case scenario.

Reply 12 of 26, by RetroPCCupboard

Rank: Oldbie
Deunan wrote on 2025-08-04, 13:09:

I've read that the 8088 was IBM's choice because there were already some I/O devices for an 8-bit data bus that could be easily adapted to the XT. Going full 16-bit with the 8086 requires the mobo to split transactions, which was not that easy with early mobo implementations that had mostly 74-series logic on them and maybe some PALs.

As for the OP's questions, ASM is usually considerably faster, even compared to C. Especially on such slow and limited machines, where every cycle counts and every register spill is a dozen cycles wasted. But the bigger the program, the more work it is to keep it optimized in assembly. At some point you'll have had enough of tracking which registers get trashed where, and either start saving and restoring everything with PUSH/POP or come up with your own conventions and decide to write everything according to some rules - which registers can get trashed, what needs save/restore, how to return values, etc. It is at this moment that you realize that C does all that for you with way less work.

Now, these early compilers are not great at optimizations, but then again not much can be done on x86 until the 386. The C compiler is supposed to produce code that is "competent" - not nearly as good as pure assembly, but also not horribly slow, by tracking register usage and doing as much as possible in registers. You want to balance things: write C code (unless you really want all the performance and are prepared to spend time, a lot of time, writing code), but perhaps use some inline assembly or some external functions written in ASM for specific tasks. Like, say, video memory manipulation, if you also need plane or bank changes, etc.

My advice is: go ahead and learn x86 assembly. Go as far with it as you can; you will hit that point I mentioned, and it'll simply force you to reconsider C. At that point you'll know both, and also, perhaps, how to effectively use assembly with C. Most compilers have an option to output the assembly of the compiled code in text form; use that to study just how badly optimized a single C procedure can be, but also how it makes more sense to use C anyway as the program grows.

Thanks for all the tips. Sounds like you know your stuff! I find it incredible that games like RollerCoaster Tycoon were 100% assembly. Genius, or a madman? 😀

Reply 13 of 26, by igully

Rank: Newbie
RetroPCCupboard wrote on 2025-08-04, 09:33:

Turbo Assembler: 543 bytes

543 bytes is way too big for a simple "Hello World" in Turbo Assembler.
I don't know what you are doing, but the resulting executable file size should be less than 50 bytes without any "magic tricks".

Reply 14 of 26, by RetroPCCupboard

Rank: Oldbie
igully wrote on 2025-08-04, 15:13:
RetroPCCupboard wrote on 2025-08-04, 09:33:

Turbo Assembler: 543 bytes

543 bytes is way too big for a simple "Hello World" in Turbo Assembler.
I don't know what you are doing, but the resulting executable file size should be less than 50 bytes without any "magic tricks".

It is the sample hello.asm that came with Turbo Assembler. I just called tasm and then tlink on it.

Reply 15 of 26, by igully

Rank: Newbie

COMMENT #

Compiles under Borland TASM 4.0

  TASM /M1 TEST.ASM
  TLINK /T TEST.OBJ

HELLO.COM -> 23-byte file size
#

seg_a           segment byte public
                assume  cs:seg_a, ds:seg_a

                org     100h

program_code:

                mov     ah,9                    ; DOS Services ah=function 09h
                mov     dx,offset disp_msg      ; display char string at ds:dx
                int     21h

                int     20h                     ; terminate program without return code (with 00h)

disp_msg        db      'Hello World', 0Dh, 0Ah ; string to display
                db      '$'

seg_a           ends

                end     program_code

Reply 16 of 26, by RetroPCCupboard

Rank: Oldbie
igully wrote on 2025-08-04, 15:25:

COMMENT #

Compiles under Borland TASM 4.0

  TASM /M1 TEST.ASM
  TLINK /T TEST.OBJ

HELLO.COM -> 23-byte file size
#

seg_a           segment byte public
                assume  cs:seg_a, ds:seg_a

                org     100h

program_code:

                mov     ah,9                    ; DOS Services ah=function 09h
                mov     dx,offset disp_msg      ; display char string at ds:dx
                int     21h

                int     20h                     ; terminate program without return code (with 00h)

disp_msg        db      'Hello World', 0Dh, 0Ah ; string to display
                db      '$'

seg_a           ends

                end     program_code

Umm, yeah. That is quite different. You are generating a COM; I am making an EXE:

The attachment 20250804_164056.jpg is no longer available

Reply 17 of 26, by gerry

Rank: l33t

It is interesting to be reminded of compile times on old machines; we are used to things being very fast now. I do recall Turbo C and Turbo Pascal both being 'fast' in comparison with other compilers of the time. The relative slowness of the TC compile for 'hello world' is not linear with program size; in other words, a program that does a few things, perhaps loading and converting file formats, will only take a little longer than a 'hello world' program.

I remember writing some file-splitting programs years ago that took a noticeable time to actually read and write the file parts - a reminder of much slower disk I/O back then, which may also be a factor in compilation.

Reply 18 of 26, by RetroPCCupboard

Rank: Oldbie
DaveDDS wrote on 2025-08-04, 12:51:

I do have an ancient "Poqet PC", which is an 8088... Hard to get stuff
on/off, but if you want, I can try to run the tests on it (timing would
probably be by stopwatch).

Don't go out of your way. But if you are curious, it would be an interesting comparison.

Reply 19 of 26, by RetroPCCupboard

Rank: Oldbie
gerry wrote on 2025-08-04, 15:49:

It is interesting to be reminded of compile times on old machines; we are used to things being very fast now. I do recall Turbo C and Turbo Pascal both being 'fast' in comparison with other compilers of the time. The relative slowness of the TC compile for 'hello world' is not linear with program size; in other words, a program that does a few things, perhaps loading and converting file formats, will only take a little longer than a 'hello world' program.

I remember writing some file-splitting programs years ago that took a noticeable time to actually read and write the file parts - a reminder of much slower disk I/O back then, which may also be a factor in compilation.

I guess it is like with games. These days even 60 fps is seen as low, but in the early PC days some games could be down in the single digits of fps.

I remember in the early 90s running Windows 3.x on an 8 MHz 286. I had to wait minutes for anything to load. But I didn't mind.