VOGONS


Creating 80186 Based System


Reply 100 of 120, by mkarcher

User metadata
Rank l33t
Jo22 wrote on 2024-12-14, 00:31:

The Alphatronic P50-2 by Triumph Adler did use an 80186, too.
For compatibility reasons, the integrated peripherals were not used.
It uses the external interrupt mode, though. Mame perhaps supports the P50-2 by now.

Do we have any idea why machines like this used the 80186 over the NEC V30? I would have guessed that the V30 is cheaper and 8086 mainboard designs were already done by most companies.

Reply 101 of 120, by Jo22

User metadata
Rank l33t++
mkarcher wrote on 2024-12-14, 15:56:
Jo22 wrote on 2024-12-14, 00:31:

The Alphatronic P50-2 by Triumph Adler did use an 80186, too.
For compatibility reasons, the integrated peripherals were not used.
It uses the external interrupt mode, though. Mame perhaps supports the P50-2 by now.

Do we have any idea why machines like this used the 80186 over the NEC V30? I would have guessed that the V30 is cheaper and 8086 mainboard designs were already done by most companies.

Hi, I can only guess. Maybe because the NEC V20/V30 wasn't produced until March 1984.
I mean, we know how long the design phase can be, especially in Germany with the FTZ/ZZF/BZT testing of the time.

That Alphatronic mainboard had perhaps already been finished in 1983, but production or sale of the Alphatronic PC didn't start before 1984.
And at that point, it was too late to change the motherboard. A redesign would have required months, delaying the sale for equally long.

That being said, the 80186 did at least have a more capable instruction set.
The BIU wasn't much greater than that of the 8086, but maybe the proprietary software was written in a way to take advantage of the 80186.
That's just a wild guess, of course. We'd have to disassemble those "OEM" versions of CP/M-86 and DOS and the application software to be sure, I believe.

But maybe there's another reason, not sure. The Intel chips could be second-sourced, for example.
Siemens made some 80186s, for example (I've got a Siemens 80286 in my first PC).
http://www.cpu-galerie.de/html/siemens80186.html

Or maybe there were political reasons? The NEC V20/V30 was a competing chip from Japan, whereas Intel was one of our American 'friends'.
OK, maybe that's too far-fetched. But who knows?
Back in the day, shady agreements were not seldom made behind closed doors.

"Time, it seems, doesn't flow. For some it's fast, for some it's slow.
In what to one race is no time at all, another race can rise and fall..." - The Minstrel

//My video channel//

Reply 102 of 120, by mkarcher

User metadata
Rank l33t

We were talking V30 vs. 80186 with internal peripherals disabled

Jo22 wrote on 2024-12-14, 19:05:

That being said, the 80186 did at least have a more capable instruction set.

I don't think this is accurate. Until disproven, I'll keep my stance that both the 80186 and the V30 can execute all 80286 instructions that are not protected-mode management instructions (or LOADALL). Actually, the V30 has a more capable instruction set, as it also includes BCD-oriented nibble-based string operations (IIRC) and an 8080 emulation mode.

Reply 103 of 120, by Jo22

User metadata
Rank l33t++

^I think the INS/OUTS instructions are notable, though, at the very least.
https://www.eeeguide.com/instruction-set-of-80186/

Edit:

I don't think this is accurate.

I meant to say the "80186 did at least have a more capable instruction set [in comparison to the basic 8086]".
That's why I mentioned the 8086 in the following line. I didn't mean to say that the NECs are inferior in any way.


Reply 104 of 120, by mkarcher

User metadata
Rank l33t
Jo22 wrote on 2024-12-14, 21:27:

^I think the INS/OUTS instructions are notable, though, at the very least.
https://www.eeeguide.com/instruction-set-of-80186/

It is. That's how I get more than 200KB/s application-level throughput reading from a parallel-port CD-ROM drive in my Turbo XT, with a V20 at 10MHz. I'm using an EPP-capable parallel port, and the possibility to run REP INSB really saves the day. See PDF page 73 (document page 12-28) in the NEC V20/V30 User's Manual. While NEC uses the name INM ("in multiple") instead of INSB ("in string byte") and uses the register names DW for DX and IY for SI, it is the same instruction with the same opcode.

According to the NEC datasheet, REP INSB operates at 8 clocks per cycle, which would yield a theoretical raw transfer rate of 1.25MB/s. This is limited by the bus performance. My mainboard adds a wait state to ISA cycles at 10MHz (I measured it for memory cycles; it may be more for I/O cycles), dropping the theoretical maximum to 1.1MB/s. Then you need to subtract 7 to 10% to cater for the RAM refresh, which takes over the bus for some time, resulting in a maximum burst performance for REP INSB of 1MB/s. I expect the 80186/80188 to hit the bus interface limit as well, so REP INS/REP OUTS performance is not likely a significant factor in deciding V30 vs. 80186.
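Those numbers can be sanity-checked with a quick back-of-the-envelope script (just a sketch: the 8 clocks/byte figure is from the NEC datasheet, the one-wait-state and ~8.5% refresh-loss figures are my estimates from above):

```python
# Back-of-the-envelope check of the REP INSB burst-rate estimate.
CLOCK_HZ = 10_000_000      # V20 at 10 MHz
CLOCKS_PER_BYTE = 8        # REP INSB, per the NEC datasheet

raw = CLOCK_HZ / CLOCKS_PER_BYTE               # no wait states
with_ws = CLOCK_HZ / (CLOCKS_PER_BYTE + 1)     # one ISA wait state per bus cycle
refresh_loss = 0.085                           # assume ~8.5% of bus time lost to DRAM refresh
burst = with_ws * (1 - refresh_loss)

print(f"raw:   {raw / 1e6:.2f} MB/s")          # 1.25 MB/s
print(f"+ws:   {with_ws / 1e6:.2f} MB/s")      # 1.11 MB/s
print(f"burst: {burst / 1e6:.2f} MB/s")        # ~1.02 MB/s
```

So even under optimistic assumptions the bus, not the CPU, is the ceiling.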

Reply 105 of 120, by Jo22

User metadata
Rank l33t++
mkarcher wrote on 2024-12-14, 21:46:
Jo22 wrote on 2024-12-14, 21:27:

^I think the INS/OUTS instructions are notable, though, at the very least.
https://www.eeeguide.com/instruction-set-of-80186/

It is. That's how I get more than 200KB/s application-level throughput reading from a parallel-port CD-ROM drive in my Turbo XT, with a V20 at 10MHz. I'm using an EPP-capable parallel port, and the possibility to run REP INSB really saves the day. See PDF page 73 (document page 12-28) in the NEC V20/V30 User's Manual. While NEC uses the name INM ("in multiple") instead of INSB ("in string byte") and uses the register names DW for DX and IY for SI, it is the same instruction with the same opcode.

Well done! 😃 And thanks for the info about the mnemonics.
I vaguely remember from my father (a Z80 fan) that Intel and Zilog had used different names, too.

Btw, could it be that the 80186 was the most affordable CPU of the new x86 generation at the time?

The 80286 was available since February 1982. On paper.
I haven't found any information about availability or pricing of 80186/80286 for 1983/1984.

Could it be that the 80186 was either lower in price or available in higher quantity (vs. the 286)?
Some sources on the internet also claim that the 80186 has a comparatively simple CPU core.

Maybe that means 80186 production had a better yield than the 80286?
On the other hand, what about the integrated peripherals? Were they easy to make? What defect rate did they have?

Also, could it be that the 80186 PCs were made from defective, cheaper 80186 chips in which only the CPU core remained intact?

That's just pure speculation, of course. I just wonder, because Sinclair used defective RAM chips for the ZX81 (they were higher-capacity types, with the defective half disabled).


Reply 106 of 120, by BitWrangler

User metadata
Rank l33t++
mkarcher wrote on 2024-12-14, 21:46:
Jo22 wrote on 2024-12-14, 21:27:

^I think the INS/OUTS instructions are notable, though, at the very least.
https://www.eeeguide.com/instruction-set-of-80186/

It is. That's how I get more than 200KB/s application-level throughput reading from a parallel-port CD-ROM drive in my Turbo XT, with a V20 at 10MHz. I'm using an EPP-capable parallel port, and the possibility to run REP INSB really saves the day. See PDF page 73 (document page 12-28) in the NEC V20/V30 User's Manual. While NEC uses the name INM ("in multiple") instead of INSB ("in string byte") and uses the register names DW for DX and IY for SI, it is the same instruction with the same opcode.

According to the NEC datasheet, REP INSB operates at 8 clocks per cycle, which would yield a theoretical raw transfer rate of 1.25MB/s. This is limited by the bus performance. My mainboard adds a wait state to ISA cycles at 10MHz (I measured it for memory cycles; it may be more for I/O cycles), dropping the theoretical maximum to 1.1MB/s. Then you need to subtract 7 to 10% to cater for the RAM refresh, which takes over the bus for some time, resulting in a maximum burst performance for REP INSB of 1MB/s. I expect the 80186/80188 to hit the bus interface limit as well, so REP INS/REP OUTS performance is not likely a significant factor in deciding V30 vs. 80186.

Very nice speeds. If I can figure that out on either my Sharp PC-4600 series (4640) or my Apco Turbo (PB VX88 clone twin) then I might get a *Shark drive or CF reader running faster than their original HDDs 🤣

(* It's like a zip drive, but way cooler, because Jaws 😜 )

Unicorn herding operations are proceeding, but all the totes of hens teeth and barrels of rocking horse poop give them plenty of hiding spots.

Reply 107 of 120, by mkarcher

User metadata
Rank l33t
Jo22 wrote on 2024-12-16, 01:22:

Well done! 😃 And thanks for the info about the mnemonics.
I vaguely remember from my father (a Z80 fan) that Intel and Zilog had used different names, too.

Yeah, the Z80 can execute 8080 code, as its instructions are a superset of the 8080 instructions. On the other hand, the Zilog Z80 assembler syntax looks vastly different from the Intel 8080 assembler syntax. The 8080 assembler syntax seems designed to be minimalistic, allowing a very simple assembler implementation. The Z80 assembler syntax, on the other hand, is designed to be regular and more obvious to the reader. For example, the register pair H (high) and L (low) can contain a 16-bit pointer (it is what evolved into BX, consisting of BH and BL, on the 8086). The 8086 syntax uses [bx] to refer to the value pointed to by BX; the Zilog Z80 syntax uses (HL) to refer to the value pointed to by the pair HL. The difference between using parentheses or square brackets to denote taking a memory value pointed to by a register seems quite minimal to me. The 8080 syntax, however, uses a "virtual" register called "M" to denote the "memory" pointed to by HL. To be fair, the Z80 had to invent a new syntax to replace M, because it can also address using (DE), not only (HL).

Furthermore, there are 16-bit increment instructions in the 8080 instruction set. They increment a register pair like BC or HL. In Z80 assembly, you write "INC BC" or "INC HL". In 8080 assembly, you write "INX H". The "X" stands for something like "extended" and is used in all mnemonics that refer to register pairs, but the operand is just spelled "H". You have to know that H is paired with L to form a 16-bit operand, whereas the Z80 syntax explicitly names the full operand.
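From memory, the same machine code spells out like this in the two syntaxes (a sketch; opcode bytes quoted from memory, so double-check against the manuals):

```asm
; same opcode bytes, two assembler syntaxes
; Intel 8080       Zilog Z80         opcode   meaning
  MOV  A,M     ;   LD   A,(HL)     ; 7E       A := byte at address HL
  MOV  M,B     ;   LD   (HL),B     ; 70       byte at HL := B
  LDAX D       ;   LD   A,(DE)     ; 1A       A := byte at address DE
  INX  H       ;   INC  HL         ; 23       HL := HL + 1
```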

For people who are used to 8086 assembly syntax, it is way easier to learn the Z80 syntax than the 8080 syntax. That's probably why the native assembler for the CPU in the Nintendo Game Boy uses Z80-like syntax, even though it is not a Z80, but a different 8080 variant developed by Sharp, which admittedly took some inspiration from the Z80.

Jo22 wrote on 2024-12-16, 01:22:

Btw, could it be that the 80186 was the most affordable CPU of the new x86 generation at the time?

The 80286 was available since February 1982. On paper.
I haven't found any information about availability or pricing of 80186/80286 for 1983/1984.

Could it be that the 80186 was either lower in price or available in higher quantity (vs. the 286)?
Some sources on the internet also claim that the 80186 has a comparatively simple CPU core.

Clearly, the 80186 targeted a lower-end market than the 80286. The 80186 is an 8086 with integrated peripherals and a better execution unit (which indeed is similar, if not identical, to the 80286 execution unit) for "small computer systems", like a 16-bit home computer. The 80286, on the other hand, is a multi-tasking-capable processor able to address an unbelievable 16MB of physical RAM with memory protection, designed for professional Unix workstations. This likely made a huge price difference. That's why I didn't ask "why would anyone use an 80186 over an 80286?". We had the IBM AT since 1984, and XT clones until 1989, so there clearly was a market for lower-end machines without the 80286 processor.

On the other hand, the V30, as an "8086 drop-in replacement" with a more modern (and more complex, thus more clock-efficient) execution engine than the 8086, could be installed in any 8086 design, delivering about the same performance as an 80186. As it omits the integrated peripherals, and is a "clone" chip instead of an "original" chip, I'd guess the V30 was cheaper than the Intel 80186. It could well have been more expensive than the 8086, as it clearly is the higher-performance processor.

To me, the fact that the 80186 was earlier to market than the V30 seems the most convincing factor. Some companies might have designed systems around the 80186 pinout and/or made deals with Intel to buy a lot of 80186 chips before the V30 was out. That is a good reason to keep going that way.

Reply 108 of 120, by RetroPCCupboard

User metadata
Rank Oldbie

I recall that my school computers (in the UK) were all RM Nimbus PC-186 machines. I seem to recall that they ran DOS and Windows, but I think they were custom versions of both.

Reply 109 of 120, by Jo22

User metadata
Rank l33t++
RetroPCCupboard wrote on 2024-12-16, 13:51:

I recall that my school computers (in the UK) were all RM Nimbus PC-186 machines. I seem to recall that they ran DOS and Windows, but I think they were custom versions of both.

They were great! They were maybe the first PCs to make good use of MS Windows 1.x! 😁

Btw, there's an article about the Acorn Archimedes on Hackaday.com right now! That's another computer from the UK.

mkarcher wrote on 2024-12-16, 07:59:

To me, the fact that the 80186 was earlier to market than the V30 seems the most convincing factor. Some companies might have designed systems around the 80186 pinout and/or made deals with Intel to buy a lot of 80186 chips before the V30 was out. That is a good reason to keep going that way.

Yes, that makes sense.

Also, thankfully, the SoC versions of the NEC V20/V30 were popular in the 90s, at the very least.
Palmtop PCs and highly integrated Turbo XTs used NEC SoCs (the V40, for example, in the Goldstar XT-IV or XT-4).

Btw, the 8080 emulation mode seems interesting.
As far as I understand, the V20/V30 map (or rename) the 8080 mnemonics to the corresponding 8086 mnemonics.

And that's why the NECs can't do Z80 instructions: they're not available in the 8086 instruction set.

Another chip was going to fix this, the NEC μPD9002.
Unfortunately, that chip is something of a mystery. Only the NEC PC-88VA used it.
This is such a sad loss to all of us, I think! 🙁

I often wonder how the IT landscape might have looked if the "8080 emulation mode" had found its way into later x86 processors.
If, for example, Intel had made an agreement to license the "8080 emulation mode" technology.

In this altered timeline, the emulation scene might have started much sooner, or had better success stories early on.
If history had been different, things like Game Boy or Master System emulators would have been possible not only on Turbo XTs (as in our timeline),
but also on PC/AT compatibles, with the 286 and 386 chips made by other companies being used for hardware-assisted 8080 emulation.

From a historical point of view, this would have made sense.
The i8086 and i8080 are sister chips. It would only be natural if a descendant of the Intel 8086 had offered binary compatibility with the i8080.


Reply 110 of 120, by Jo22

User metadata
Rank l33t++
RetroPCCupboard wrote on 2024-12-16, 13:51:

I recall that my school computers (in the UK) were all RM Nimbus PC-186 machines. I seem to recall that they ran DOS and Windows, but I think they were custom versions of both.

By the way, in addition to MS Windows 1.x there seem to be versions of both Windows 2.1 and 3.0 for the RM Nimbus PC-186!

RM User Group:
https://www.facebook.com/story.php?story_fbid … 100064601235788

There's more information on thenimbus.co.uk as well as some blogspot site.

Of course, I know very little about the RM Nimbus 186, but I must say it has some slight similarities to the PC-98 platform from Japan.
Both use a non-IBM architecture and have their own software catalogue, and both use proprietary expansion boards rather than ISA cards.


Reply 111 of 120, by BitWrangler

User metadata
Rank l33t++

Yah, that must have been when I first saw Windows in real life, probably 2.1, on an RM Nimbus at school. However, it was a look-but-don't-touch thing, as they were in the business suite lab, and you had to be in that course to get anywhere near... so basically sneak in at lunchtime and gawp. Some ragtag bunch of RM 380Z, 480Z and BBC Master machines were strung throughout the rest of the school, networked with a multi-user DOS somehow.


Reply 112 of 120, by rmay635703

User metadata
Rank Oldbie
kant explain wrote on 2023-12-22, 18:46:

There are a number of variants. 80188/86xl, 80c188eb ... a company even produced an 80187 math coproc at one point.

Any ideas what the point of the 80187 was?

The 8087 supposedly could be used instead (with some constraints and interface challenges).

It was released after the 186 was used in PCs.

It basically functioned the same as a 287XL with no internal MMU (unlike the 8087);
it was 16-bit only and thus wouldn't work on the 80188 and certain revs of the 186.

I'm just curious what equipment would have used it.

Last edited by rmay635703 on 2025-01-13, 20:31. Edited 2 times in total.

Reply 113 of 120, by mkarcher

User metadata
Rank l33t
rmay635703 wrote on 2025-01-13, 17:20:
kant explain wrote on 2023-12-22, 18:46:

There are a number of variants. 80188/86xl, 80c188eb ... a company even produced an 80187 math coproc at one point.

Any ideas what the point of the 80187 was?

It was released after the 186 was used in PCs.

It basically functioned the same as a 287XL with no internal MMU (unlike the 8087);
it was 16-bit only and thus wouldn't work on the 80188 and certain revs of the 186.

As I understand it, the 80186 bus protocol (including FPU handling) is sufficiently similar to the 8086 bus protocol that the 8087 can be connected to the 80186 front-side bus. Intel Application Note AP-258, "High Speed Numerics with the 80186/80188 and 8087", shows how it is done, and it explains that some assistance from the 82188 bus controller chip is required. The idea of the 8086-8087 coupling is that the coprocessor is a bus master that reads and writes directly to memory after arbitrating for the front-side bus. The 8087 supports the 8086 protocol using a single wire for both requesting and granting the bus. This protocol was invented by Intel to save on pins of the 8086, as Intel targeted the very common DIP40 package for that processor, which was quite sparse on pins with respect to the demands of a 16-bit processor like the 8086. An important point of the 80186 was easy integration into embedded systems, and Intel gave up on trying to cram that chip into 40 pins, so the 80186 uses the more conventional HOLD request/HOLD acknowledge pin pair. The 82188 is able to translate from the single-wire protocol used by the 8087 to the conventional protocol used by the 80186.

The 80287 uses a completely different philosophy for interfacing with the processor, so it also doesn't need an MMU; it uses the MMU of the host processor instead. The 80287 is a 16-bit target on the local bus that responds to I/O cycles to ports F8-FE (an external decoder is required), and it has a control line to make the 286 transfer data from the operand address of an FPU instruction to the data port of the 287. This means that all addressing and protection checking is done by the 286. Interestingly, the 80C187 (note the "C" in the model number, indicating a CMOS manufacturing process) also uses the 287-type bus interface, not the 8087-type one. The datasheet of the 80C187 only describes interfacing with the 80C186, not the 80186 without the C, and indeed: the datasheet of the 80C186 also refers only to the 80C187 coprocessor, so:

The 80C187 was not needed before the 80C186, because the earlier 80186 was designed to be used with an 8087. The 80C186, on the other hand, requires the 80C187 coprocessor. The 80C186 datasheet clearly states: "NOTE: 80C186 processing of ESC (numeric coprocessor) opcodes differs substantially from the 80186." Thus the question shifts to: why did we need an 80C187 when there already was an 80287XL? The answer seems to be that while all the 287XL derivatives have the "same interface" in that they respond to 16-bit I/O cycles, the FSB protocols of the 80(C)186, the 286 and the 386SX differ subtly in how they signal I/O cycles, so the 287XL exists with slightly different bus interface units, each optimized to be easily connected to the corresponding CPU.

Reply 114 of 120, by rmay635703

User metadata
Rank Oldbie

Thanks, I was wondering what applications these would have gone into.

Considering PCs didn't use 80C186 chips, I am imagining some type of bespoke or embedded equipment.

Modems, networking, printers, scanners all used the 186, just not sure what would have used floating point.

Reply 115 of 120, by mkarcher

User metadata
Rank l33t
rmay635703 wrote on 2025-01-13, 20:43:

Modems, networking, printers, scanners all used the 186, just not sure what would have used floating point.

PostScript interpreters might benefit from a numeric coprocessor. OTOH, 1MB address space is not very plentiful for a 300dpi (~9 megapixels per page) laser PostScript printer.

Reply 116 of 120, by Jo22

User metadata
l33t++

Not quite 80186, but the 8086-based Olivetti M24 (AT&T 6300) had excellent documentation.
The BIOS listing is available too, it seems.

More information here:
https://www.forum64.de/index.php?thread/58083 … c-olivetti-m24/


Reply 117 of 120, by Jo22

User metadata
Rank l33t++

Hi again. There are people building XT motherboards using the NEC V40.
How about building a NEC V50-based system, then?
The V50 uses a 16-bit data bus, similar to the 80186.
https://www.cpu-world.com/CPUs/V50/index.html


Reply 118 of 120, by rmay635703

User metadata
Rank Oldbie
mkarcher wrote on 2025-01-13, 20:59:
rmay635703 wrote on 2025-01-13, 20:43:

Modems, networking, printers, scanners all used the 186, just not sure what would have used floating point.

PostScript interpreters might benefit from a numeric coprocessor. OTOH, 1MB address space is not very plentiful for a 300dpi (~9 megapixels per page) laser PostScript printer.

Many 300dpi printers had 1MB or less of memory, the assumption being that you wanted the 300dpi for text, and that most full pages were actually, with margins, 8x10 inches or smaller, which does fit.
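That fit is easy to check with a quick sketch, assuming a 1-bit-per-pixel monochrome raster:

```python
# RAM needed for a 300 dpi, 1-bit-per-pixel page raster.
DPI = 300

def page_bytes(width_in, height_in, dpi=DPI):
    """Bytes for a monochrome raster of the given page size in inches."""
    pixels = int(width_in * dpi) * int(height_in * dpi)
    return pixels // 8  # 1 bit per pixel

full_letter = page_bytes(8.5, 11)   # full US Letter page
with_margins = page_bytes(8, 10)    # printable area inside margins

print(full_letter)    # 1051875 -> just over 1MB (1048576 bytes)
print(with_margins)   # 900000  -> fits comfortably in 1MB
```

So a full-bleed Letter page just barely overflows a 1MB buffer, while the 8x10 printable area fits.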

Also, I had a Panasonic Omniflex fast thermal light printer, EPL-8018rat, that most definitely had a 186 on the main motherboard, though admittedly it was more or less a 208dpi color printer.

Reply 119 of 120, by Jo22

User metadata
Rank l33t++
rmay635703 wrote on 2025-01-28, 22:38:
mkarcher wrote on 2025-01-13, 20:59:
rmay635703 wrote on 2025-01-13, 20:43:

Modems, networking, printers, scanners all used the 186, just not sure what would have used floating point.

PostScript interpreters might benefit from a numeric coprocessor. OTOH, 1MB address space is not very plentiful for a 300dpi (~9 megapixels per page) laser PostScript printer.

Many 300dpi printers had 1MB or less of memory, the assumption being that you wanted the 300dpi for text, and that most full pages were actually, with margins, 8x10 inches or smaller, which does fit.

Also, I had a Panasonic Omniflex fast thermal light printer, EPL-8018rat, that most definitely had a 186 on the main motherboard, though admittedly it was more or less a 208dpi color printer.

How cute! My HP LaserJet+ had a Motorola 68000! 😁
