VOGONS


First post, by kolano

User metadata
Rank Oldbie

It seems the Win3mu Win16 emulation layer has made some good progress and will shortly have its source code released.

http://www.toptensoftware.com/win3mu/

Eyecandy: Turn your computer into an expensive lava lamp.

Reply 1 of 18, by Jo22

User metadata
Rank l33t++

Wow, thanks for the information! That's good news! 😀

"Time, it seems, doesn't flow. For some it's fast, for some it's slow.
In what to one race is no time at all, another race can rise and fall..." - The Minstrel

//My video channel//

Reply 2 of 18, by Santacruz

User metadata
Rank Newbie
Jo22 wrote on 2018-06-10, 14:54:

Wow, thanks for the information! That's good news! 😀

What can we expect when this happens? A lot more stuff coming from the community? That'd be awesome, if so.

Last edited by Santacruz on 2022-05-14, 10:42. Edited 2 times in total.

Reply 3 of 18, by Jo22

User metadata
Rank l33t++
Santacruz wrote:

What can we expect when this happens? A lot more stuff coming from the community? That'd be awesome, if so.

In the best case, we can get Win16 games that can take advantage of modern Windows features (Win3mu maps old API calls to 32/64-bit APIs).
A true 16-bit environment (286-ish CPU core), high resolutions in 16:10 incl. DPI scaling, direct access to MIDI synths via MCI, etc.

In the worst case, we find ourselves surrounded by a storm of converted Win16 games. 😉
Either way, the outcome will be rather positive, I believe.
Without relying on emulators and VMs, more people will play these classic games again.
And unaltered Win3x games are preserved within images of old shareware CDs and the Internet Archive.
They won't go away (I still have a large personal, physical collection of such CDs).

As far as stuff from the community is concerned.. I have no idea. 😅
But it can only be positive. Personally, I'm thankful for the author's decision.
It must have been a hard decision for him to make. We have to keep in mind that this project started out as a very personal thing.
If memory serves, he wrote it to do a friend a favor. What is now Win3mu was once intended to run one of his/her old favorite games.

"Time, it seems, doesn't flow. For some it's fast, for some it's slow.
In what to one race is no time at all, another race can rise and fall..." - The Minstrel

//My video channel//

Reply 4 of 18, by root42

User metadata
Rank l33t

What is the difference between Win3mu and Wine? IIRC, Wine also emulates the 16-bit Windows API?

YouTube and Bonus
80486DX@33 MHz, 16 MiB RAM, Tseng ET4000 1 MiB, SnarkBarker & GUSar Lite, PC MIDI Card+X2+SC55+MT32, OSSC

Reply 5 of 18, by Jo22

User metadata
Rank l33t++
root42 wrote:

What is the difference between Win3mu and Wine? IIRC, Wine also emulates the 16-bit Windows API?

Win3mu emulates the inner workings of Windows 3.0 (KERNEL, GDI, USER) and uses a simplified memory scheme.
It also converts EXE files during the process. The memory scheme is quite interesting, I think.
It emulates aspects of a 286's protected-mode memory.
Memory isn't moved around; instead, pointers to memory locations are changed.
That apparently works surprisingly well, because Win3mu only runs one EXE at a time.
https://hackernoon.com/win3mu-part-6-memory-m … ent-289233ef351
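The pointer idea described above can be sketched in a few lines. This is a toy model under my own assumptions, not Win3mu's actual code or API: far pointers are (selector, offset) pairs, and the emulator keeps a table mapping selectors to host buffers, so "moving" or resizing a block rebinds the selector instead of copying memory around, and existing far pointers stay valid.

```python
# Toy sketch of a selector-based memory scheme (illustrative names,
# not Win3mu's real implementation): guest far pointers are
# (selector, offset) pairs; the table maps selectors to host buffers.
class SelectorTable:
    def __init__(self):
        self._segments = {}        # selector -> bytearray (host memory)
        self._next = 0x0008

    def alloc(self, size):
        sel = self._next
        self._next += 8            # descriptors are 8 bytes apart
        self._segments[sel] = bytearray(size)
        return sel

    def resize(self, sel, new_size):
        # "Moving" the block: rebind the selector to a new buffer.
        # No guest pointer changes; (sel, offset) stays valid.
        old = self._segments[sel]
        buf = bytearray(new_size)
        n = min(len(old), new_size)
        buf[:n] = old[:n]
        self._segments[sel] = buf

    def read(self, sel, offset):
        return self._segments[sel][offset]

    def write(self, sel, offset, value):
        self._segments[sel][offset] = value

table = SelectorTable()
sel = table.alloc(16)
table.write(sel, 0, 0x42)
table.resize(sel, 64 * 1024)       # grow to a full 64 KB segment
assert table.read(sel, 0) == 0x42  # old far pointer still works
```

Since only one EXE runs at a time, a single flat table like this suffices; there is no need to juggle per-task descriptor tables.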

In comparison to Wine, it isn't emulating the API but rather mapping functions between Win16 and Win32/64 in a 1:1 fashion.
I'm speaking under correction, of course. I couldn't test Win3mu yet.

By the way, I heard that Wine 3.1 started to drop Win16 support. Is that true?
https://www.reddit.com/r/linux/comments/8t60m … leased/e16b017/

From what I read in the Wine changelog, it looked more like a removal of DOS/V86 support.
Perhaps because it now uses DOSBox for that, and because Linux kernels had issues with 16-bit protected-mode code.
(An official patch had been issued to "fix" that. Maybe the Wine team wants to be independent of that and plans some changes.)

"Time, it seems, doesn't flow. For some it's fast, for some it's slow.
In what to one race is no time at all, another race can rise and fall..." - The Minstrel

//My video channel//

Reply 6 of 18, by NJRoadfan

User metadata
Rank Oldbie

The reason Win16 support was removed from x64 versions of Windows is that Microsoft was likely using v86 mode to simulate the 64k memory segmentation used by the 286 and to easily isolate the processes from the rest of the system. Win3mu replicates this with its own emulator and memory mapper. I think WoW contained the rest of the core Windows 3.1x APIs, or at least specific bits that weren't also in Win32. The rest was passed through or stubbed to native Win32 calls. This is why Win16 programs were able to use native widgets and window styles.

Reply 7 of 18, by Jo22

User metadata
Rank l33t++

That could be possible. Though Win16 apps were friendly when it came to different modes.
The 64K segments worked in real, protected and v86 mode. In its essence, Windows 3.x never required v86.
Windows 3.1 once ran on true 286s, after all. In addition, the NTVDM/WoW also existed in a fully emulated form (based on SoftPC, I heard).
Microsoft could have re-used some code bits from the NT4 days to make it run on x64, if they had wished.
(So even if they had failed at implementing 16-bit protected-mode code {x64 in Long Mode allows 286 protected-mode code to be executed},
they could have gone that route.)

Windows 3.x was much more flexible than the DOS ecosystem, so things like VGA/XMS/EMS/int13h and port traps
could all have been left out. Wabi for Linux did similar things. It ran a modified Windows 3.1 kernel without DOS.
Thanks to today's processor speeds, an un-optimized 286 emulation would have been more than enough.
(Almost too fast, in fact. An intelligent, software-controlled brake, like in a NES emulator, would be desirable.)
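The "software-controlled brake" mentioned above is usually just a throttling loop of the kind NES emulators use: run a batch of emulated cycles flat-out, then sleep until wall-clock time catches up with emulated time. A minimal sketch, with an illustrative clock rate and batch size of my own choosing:

```python
import time

# Sketch of an emulator speed governor: emulate cycles in batches,
# then sleep so emulated time never runs ahead of real time.
CLOCK_HZ = 8_000_000          # pretend 8 MHz 286 (illustrative)
BATCH = 80_000                # cycles between throttle checks (10 ms)

def run_throttled(step, batches):
    start = time.perf_counter()
    cycles = 0
    for _ in range(batches):
        step(BATCH)           # emulate BATCH cycles as fast as possible
        cycles += BATCH
        target = cycles / CLOCK_HZ                     # ideal elapsed time
        ahead = target - (time.perf_counter() - start) # how far ahead we are
        if ahead > 0:
            time.sleep(ahead)  # the "brake": wait for real time to catch up
    return cycles

# Dummy CPU step; a real emulator would execute instructions here.
done = run_throttled(lambda n: None, batches=5)
```

The same loop also solves the opposite problem: on a slow host the `sleep` simply never fires and the emulator runs as fast as it can.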

Anyway, your point is still valid, I believe. In a real world scenario, no one misses Win16;
except for a few nostalgics or old-school gamers like you and me. ^^

Edit: Edited. Some typos fixed, etc.

"Time, it seems, doesn't flow. For some it's fast, for some it's slow.
In what to one race is no time at all, another race can rise and fall..." - The Minstrel

//My video channel//

Reply 8 of 18, by root42

User metadata
Rank l33t

That's why I have Windows 3.1 running on my 286. 😀 So I am not dependent on Microsoft to support Win16 in Windows 10! 😁

YouTube and Bonus
80486DX@33 MHz, 16 MiB RAM, Tseng ET4000 1 MiB, SnarkBarker & GUSar Lite, PC MIDI Card+X2+SC55+MT32, OSSC

Reply 9 of 18, by Jo22

User metadata
Rank l33t++

Me, too! Thanks for your comment! 😁
The 286 may have been clumsy at times, but it was also a very clean design.
Ironically, Windows 3.1 could have had virtual memory if MS had only put more effort into it.

A lot of people don't know that the 286 MMU had the power to address up to 1 GB of virtual memory.
Unfortunately, it lacked the ability to support "swap to disk" on its own. It required software assistance to do so.
And because MS already had an alternate kernel for the 386, they likely scrapped the idea.
That 386 kernel was dubbed "386 Enhanced Mode", so it became clear that MS saw it as superior.
On the other hand, and confusingly, Standard Mode was implemented after 386 Enhanced Mode.
In the days of Windows/386, it didn't exist yet.
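The 1 GB figure follows directly from the 286's descriptor arithmetic: a task sees two descriptor tables (one GDT and one LDT), each holding up to 8192 segment descriptors, and each segment can be up to 64 KB. A quick back-of-envelope check:

```python
# 286 protected-mode virtual address space, per task:
# (GDT + LDT) x 8192 descriptors x 64 KB max segment size.
DESCRIPTORS_PER_TABLE = 8192    # 13-bit descriptor index
TABLES = 2                      # GDT and LDT
SEGMENT_LIMIT = 64 * 1024       # 16-bit offset -> 64 KB per segment

virtual_space = TABLES * DESCRIPTORS_PER_TABLE * SEGMENT_LIMIT
print(virtual_space == 2**30)   # exactly 1 GB
```

The physical address bus, by contrast, was only 24 bits wide (16 MB), which is exactly why swapping was needed to make the larger virtual space usable.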

If a user wanted a better experience, they were expected to make the move to the "enhanced" 386 platform.
Sure, it's understandable. If we look at Win32s, the "hidden" Windows 3.1 flat memory API and VxD drivers, the benefits are clear.
Still... the 286 and Standard Mode were never really treated as well as they should have been.
Which is also ironic, because they were the most stable and reliable pieces.

To this day, 286 protected-mode code still works on modern processors such as the i7 or Ryzen,
whereas the Win3.1 386 kernel with all its bells and whistles goes bonkers at times. 😉

That's why I sometimes nicknamed it "enchanted mode", by the way, because that often reflected its real-world behavior. 😉
On top of that, the 286 kernel worked on virtualizers and served as a sub-system in countless GUIs/OSes.
OS/2 supported it in Win-OS/2 originally, whereas DESQview/X could run it side-by-side in a window along with X11 programs.

Looking back, perhaps only OS/2 1.x took 286 support to its fullest. It supported virtual memory and the ring schemes.
Windows 3.1 became its second-largest supporter, with hundreds of thousands of applications using 286 instruction code.
Even by the late 90s, lots of applications were written in Visual Basic 1-3 or Turbo Pascal for Windows/Delphi 1.x,
making use of the additional 186/286 instructions or at least supporting them optionally (TPW 1.x supported both 8086 and 286 code generation).

Edit: Sorry for the many edits. I've got some headache and do lack concentration. 😅

"Time, it seems, doesn't flow. For some it's fast, for some it's slow.
In what to one race is no time at all, another race can rise and fall..." - The Minstrel

//My video channel//

Reply 10 of 18, by crazyc

User metadata
Rank Member
Jo22 wrote:

The 286 may have been clumsy at times, but it was a very clean design also.

IMO, the 286 was a mess. They tried to bring ideas from the iAPX432 project to the 8086 architecture to make them more accessible (every object has its own segment with specific privileges, etc.), but without the more advanced capabilities and descriptor types. By limiting the total number of descriptors per table to 8192 and never increasing it, it quickly became impossible to create enough segments to make that model work. Of course, the 432 was a bigger mess and was quickly eclipsed by the 386, and most OS's dumped the segment model.

Unfortunately, it lacked the ability to support "swap to disk" on its own. It required software assistance to do so.

Ehhh, it's not that much different. Just substitute page faults with segment-not-present faults. You'd need to be careful of the fact that while pages can't overlap (they can be aliased, though), segments can.
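The fault-and-retry loop being described maps directly to code. A minimal sketch under my own illustrative names (not any real OS's data structures): a descriptor carries a present bit, touching a not-present segment faults, and the handler swaps the segment back in and retries the access, exactly as a pager does with page faults, just at segment granularity.

```python
# Demand-loading of segments via "segment not present" faults
# (illustrative model, not real 286 data structures).
class NotPresentFault(Exception):
    pass

swap = {}                                         # selector -> swapped-out bytes
segments = {0x10: {"present": True, "data": bytearray(b"hello")}}

def swap_out(sel):
    seg = segments[sel]
    swap[sel] = bytes(seg["data"])                # write segment to "disk"
    seg["data"], seg["present"] = None, False     # clear the present bit

def access(sel, off):
    seg = segments[sel]
    if not seg["present"]:
        raise NotPresentFault(sel)                # hardware would fault here
    return seg["data"][off]

def access_with_handler(sel, off):
    try:
        return access(sel, off)
    except NotPresentFault:                       # fault handler: swap in, retry
        segments[sel]["data"] = bytearray(swap.pop(sel))
        segments[sel]["present"] = True
        return access(sel, off)

swap_out(0x10)
value = access_with_handler(0x10, 0)              # faults once, then succeeds
```

The overlap caveat matters for eviction policy: two segments can share bytes, so swapping one out without checking for aliases could corrupt the other, whereas pages only alias through explicit mappings.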

Reply 11 of 18, by Jo22

User metadata
Rank l33t++

Thanks for reply! 😀
I'll try to respond to what you said. Please forgive my ignorance, though. I'm no developer.

crazyc wrote:
Jo22 wrote:

The 286 may have been clumsy at times, but it was a very clean design also.

IMO, the 286 was a mess. They tried to bring ideas from the iAPX432 project to the 8086 architecture to make them more accessible (every object has its own segment with specific privileges, etc.), but without the more advanced capabilities and descriptor types. By limiting the total number of descriptors per table to 8192 and never increasing it, it quickly became impossible to create enough segments to make that model work. Of course, the 432 was a bigger mess and was quickly eclipsed by the 386, and most OS's dumped the segment model.

Well, I understand what you're pointing at, but I wouldn't call it a mess. It's 1982-era technology and was quite advanced for its time. 😀
We have to keep in mind that it shared the same 70s technology as the 8086, and that the 8-bit 8080 had been developed only shortly before it.
So comparing it to the 432 or i386 isn't fair, I think. The 286 was born out of desperation and had to be designed under time pressure.

It also predated the IBM PC and MS-DOS, so the inability to revert to Real Mode, which other people often complain about, was a non-issue.
It wasn't designed for PCs, after all, but for professional applications. It became the PC's processor because nothing else 80x86-ish was available at the time.

In my opinion, the use of segmentation isn't bad at all. Sure, it was/is tiresome for the human brain to handle,
so a lot of coders and demoscene guys hated it like nothing else.

However, if a set of high-level libraries was used, it wasn't bad at all. It allowed for memory protection, for example.
All the "buffer overrun" issues that appeared in the last few years made developers finally realize how useful the
separation of code and data segments can be. 😉

That's why techniques like DEP and NX-Bit/XD-Bit were developed.
They were attempts to bring back what was lost during the flat memory craze of the past.
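What segment limits and types buy you can be shown in a tiny model (all names here are mine, purely illustrative): every access is checked against the segment's limit and type, so a write that runs past the end of a data buffer, or into a code segment, faults instead of silently trampling whatever sits next to it.

```python
# Minimal model of segment limit/type checks (illustrative, not a
# faithful 286 descriptor layout).
class GeneralProtectionFault(Exception):
    pass

def checked_write(segment, offset, value):
    if offset > segment["limit"]:
        raise GeneralProtectionFault("offset beyond segment limit")
    if segment["type"] != "data":
        raise GeneralProtectionFault("write into non-data segment")
    segment["bytes"][offset] = value

data = {"type": "data", "limit": 15, "bytes": bytearray(16)}
code = {"type": "code", "limit": 15, "bytes": bytearray(16)}

checked_write(data, 15, 0xAA)            # last valid byte: fine
try:
    checked_write(data, 16, 0xBB)        # classic buffer overrun: faults
except GeneralProtectionFault:
    overrun_caught = True
try:
    checked_write(code, 0, 0x90)         # write into code: faults
except GeneralProtectionFault:
    code_write_caught = True
```

In a flat model both bad writes would have landed somewhere; DEP/NX restores only the second check (no execute from data), not the per-object bounds check.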

Also, the 286 chip design is well structured: every piece of silicon serves a single, well-documented purpose.
All signals are accessible on the 286 front-side bus (which became the ISA bus).

"The later E-stepping level of the 80286 was a very clean CPU, free of the several significant errata that caused problems for programmers
and operating system writers in the earlier B-step and C-step CPUs (common in the AT and AT clones)
". Source).

IMHO, this is in stark contrast to newer architectures, where a huge number of units are interconnected inside the chip.
In comparison, this makes a flow diagram of the 286 easy to understand. It can also be analyzed by a signal probe or debugger more easily.
That's one of the reasons the Z80 or 8085 (and the Pascal language) were used in computing classes for so long.

That being said, everything is relative. What I wrote was no criticism in any way whatsoever.
People have different opinions about certain things and different expectations of them.
So there's no definite "right" or "wrong" about them. That's good, I think.

Edit: Your comment about the limitation of the total number of descriptors per table is interesting, and I won't disagree.
It could have been implemented more elegantly, I think. Perhaps it was too early, and the 286 design team didn't imagine
that such huge numbers would be used in reality. A few years later, 640KB ought to have been enough for everyone, after all. 😉

crazyc wrote:
Jo22 wrote:

Unfortunately, it lacked the ability to support "swap to disk" on its own. It required software assistance to do so.

Ehhh, it's not that much different. Just substitute page faults with segment-not-present faults.
You'd need to be careful of the fact that while pages can't overlap (they can be aliased, though), segments can.

You're right, I guess. The CPU was able to do certain operations if the software developers in question were capable enough.
The OS/2 folks did manage to implement a working virtual memory, after all.

By the way, this reminds me a lot of how DESQview did *wonders* on a plain 8086 processor! 😁
Preemptive multitasking with dedicated memory through the use of EMS, printer spooling, etc.

What I mean to express is that the 286 lacked abilities MS wanted to take advantage of.
So they didn't and reserved features for 386 users.

For example, the 386 Windows kernel chops 64KB segments into 4K chunks. If you're running a Win16 program in 386 mode,
it will have no clue about that. It continues to operate in 64KB segments; the translation is done transparently.
That's one of the biggest limitations of the 286 MMU: it can't do memory re-mapping the way the 386 MMU could.

Also, the segment size is fixed at 64KB on the 286. It requires a 386 or higher to use methods like flat memory
(which uses one huge 4GB segment to "disable" segmentation). That's one of the reasons LIMulators can't make use of
the 286 as they would like to (EMM286 copies memory, since it can't re-map it).
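The transparent translation described above is easy to sketch: the program addresses a 64 KB segment through 16-bit offsets, while underneath the segment is backed by sixteen 4 KB pages that need not be contiguous in physical memory. Frame numbers below are made up for illustration:

```python
# Sketch: a Win16-visible 64 KB segment backed by 4 KB pages.
PAGE = 4096

def pages_for_segment(size=64 * 1024):
    return size // PAGE            # 64 KB / 4 KB = 16 pages

# page table for one segment: page number within the segment
# -> physical frame number (deliberately non-contiguous)
frames = [7, 3, 42, 5] + list(range(100, 112))
page_table = dict(enumerate(frames))

def translate(offset):
    # program-visible 16-bit offset -> physical address
    page, within = divmod(offset, PAGE)
    return page_table[page] * PAGE + within

n = pages_for_segment()            # 16 pages per full segment
phys = translate(0x1234)           # offset in page 1 -> frame 3
```

The 286's MMU has no page table at all, only segment descriptors with a base and a 64 KB-max limit, which is exactly why an EMS emulator on a 286 must physically copy memory instead of rewriting a mapping like this.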

Edit: Several edits.
Edit: I said reallocation instead of re-mapping, which was nonsense, of course. Sorry.

Last edited by Jo22 on 2018-07-06, 19:52. Edited 1 time in total.

"Time, it seems, doesn't flow. For some it's fast, for some it's slow.
In what to one race is no time at all, another race can rise and fall..." - The Minstrel

//My video channel//

Reply 12 of 18, by Jo22

User metadata
Rank l33t++
Rank
l33t++

I just found out the very first 386 behaved similarly to the 286 in a few ways.
It wasn't designed to go back to real mode, for example. Funny how different reality can be from documented history, sometimes. 😀
It's also documented in the Intel iAPX 386 Architecture Specification, revision 1.8 (June 1985).
See http://www.os2museum.com/wp/a-brief-history-of-unreal-mode/

"Time, it seems, doesn't flow. For some it's fast, for some it's slow.
In what to one race is no time at all, another race can rise and fall..." - The Minstrel

//My video channel//

Reply 13 of 18, by crazyc

User metadata
Rank Member
Jo22 wrote:

It's 1982 era technology and was quite advanced for its time. 😀 We have to keep in mind that it shared the same 70s technology of the 8086 and that the 8-Bit 8080 was developed just recently before it.
So comparing it to the 432 or i386 isn't fair, I think. The 286 was born out of desperation and had to be designed under time pressure.

The main competition when the 286 was released was the 68010 (68020 was also released before the 386). There's a reason why there are many more different machines which use the 010 and 020 than the 286.

Jo22 wrote:

That beeing said, everything is relative. What I wrote was no criticism in any way whatsoever.
People have different opinions about certain things and have different expectations to certain things.
So there's no definite "right" or "wrong" onto them. That's good, I think.

Sure, this is all opinion. None of us were there for the 286 design discussion. And don't get me wrong I still like the 286, the first PC I had access to was a 286, I still have the motherboard from it in storage.

Reply 14 of 18, by Jo22

User metadata
Rank l33t++

Thanks! I'm currently reading Intel 80286 CPU: Real Mode Emulation
I suppose that's about what FlexOS and Concurrent DOS 286 used to multitask MS-DOS programs on a 286.
It's part of the 286 reference at https://www.pcjs.org/pubs/pc/reference/intel/80286/
That site also has information about the 386 at https://www.pcjs.org/pubs/pc/reference/intel/80386/
Anyway, I don't mean to go too much off-topic here. The thread was intended to talk about Win3mu. 😅
I just mentioned them earlier because Windows 3.1 has 286/386 kernels and because Win3mu uses 286 emulation.

crazyc wrote:

The main competition when the 286 was released was the 68010 (68020 was also released before the 386).
There's a reason why there are many more different machines which use the 010 and 020 than the 286.

Yes, the 68010 was quite comparable to the 286, also in terms of features. Developers seemed to like it,
because it was more elegant to program and caused them fewer headaches.

I suppose it was the backwards compatibility of the 8086/286 that made x86 so popular in the early days.
The ability to port over 8080/Z80 code from the CP/M era was very important to companies.

Afaik, WordStar used a semi-automatic translation which ported CP/M-80 WordStar to MS-DOS (x86);
it was discovered that the DOS version made wide use of the CALL 5 interface (DOS had/has a built-in CALL 5 to Int 21h "wrapper"),
which provided CP/M-80's entry addresses and other stuff. Ironically, CP/M-86 lacked that ability.
Its API calls (or ABI calls) were different from the original CP/M's (CP/M-86 used int 0E0h).

That being said, I'm no dev. It gets very technical from here: the A20 gate and address wrap-arounds, etc., just to name a few.
Oddly enough, CALL 5 in DOS also seems to work with an intact A20 gate (full address space), because it was once designed in a hard-core fashion no one really understands anymore (assembly magic).
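The "wrapper" idea can be illustrated without any assembly. This is a deliberately simplified toy model, not real DOS internals: CP/M-80 programs called BDOS with the function number in a register (CP/M's C register), while DOS's native entry is INT 21h with the function number in AH; the CALL 5 entry point in the PSP bridges the two calling conventions, so mechanically translated CP/M code keeps working. The function table here is illustrative.

```python
# Toy model of a CALL 5 -> INT 21h bridge (simplified; real DOS
# does this in assembly with register-level compatibility tricks).
def int21(ah, arg):
    # stand-in for a couple of classic DOS services
    services = {
        0x02: lambda a: f"putchar({chr(a)})",      # console output
        0x09: lambda a: f"print_string@{a:04X}",   # print $-string at DS:DX
    }
    return services[ah](arg)

def call5(cl, dx):
    # CP/M-style entry: function number arrives CP/M-fashion (as in
    # CP/M's C register); the bridge forwards it as AH to INT 21h.
    return int21(cl, dx)

out = call5(0x02, ord("A"))    # a translated CP/M program calls this
```

A translated CP/M binary never needs to know INT 21h exists; it keeps calling the fixed low address, and the bridge does the convention swap.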

Edit: Small edit in respect to A20.

crazyc wrote:
Jo22 wrote:

That beeing said, everything is relative. What I wrote was no criticism in any way whatsoever.
People have different opinions about certain things and have different expectations to certain things.
So there's no definite "right" or "wrong" onto them. That's good, I think.

Sure, this is all opinion. None of us were there for the 286 design discussion. And don't get me wrong I still like the 286, the first PC I had access to was a 286, I still have the motherboard from it in storage.

Hi, thanks for understanding. 😀 Besides the 386, I also like the 68k and Z80 designs. In fact, there are none I dislike so far.
The 286 is just special to me, partially because it was so misunderstood and underrated by people in the past
(not you, I'm generally speaking), and because it got so much software support when it was considered "dead"
(Windows 3.1 gave it a second life, despite the fact that by 1992 the 286 had just passed its prime and 32-bit code
and DOS/4GW extenders were taking over the gaming world).

Last but not least, by the mid-90s it was perhaps the most modern 16-bit (24-bit address space/16-bit code) processor
and the very last 16-bit processor still in common use.
286 chipsets were also interesting, because they had to be intelligent in order to do things the 286 was unable to do on its own.
That makes them quite interesting to me. Of course, some of the later 386/486 chipsets were also sophisticated in some ways.
I'm not sure, however, whether these were dual 286 and 386SX/486SX chipsets internally or new designs not based on the 286-chipset era.
As I said, the 286 and 386 were very similar in some ways. I wouldn't be surprised if they had the same parents at some point.

Last edited by Jo22 on 2018-07-06, 19:50. Edited 1 time in total.

"Time, it seems, doesn't flow. For some it's fast, for some it's slow.
In what to one race is no time at all, another race can rise and fall..." - The Minstrel

//My video channel//

Reply 15 of 18, by Jo22

User metadata
Rank l33t++

Going back to Win3mu, there's another benefit it has over a real installation of Windows 3.1:
It doesn't suffer from a jumpy mouse arrow:
http://www.os2museum.com/wp/jumpy-ps2-mouse-i … de-windows-3-x/

That was an issue 386 Enhanced Mode suffered from. Sure, serial mice aside, there was Standard Mode, too...
But unfortunately, some sound and graphics drivers refused to work with that (the PAS16 did work, as did Paradise VGAs).
They required Enhanced Mode (or more precisely, the 386 kernel and VxD support).

That's one of the things I could imagine being an improvement over a real copy of Windows 3.1. 😀
Win3mu doesn't have such restrictions. It can use anything modern Windows uses.
This includes MCI (not implemented yet), high resolutions/colour depths, network drives and so on.

Edit: Minor edit. I meant to say the PAS16/PVGA did work with Standard Mode.

Last edited by Jo22 on 2018-07-06, 19:48. Edited 1 time in total.

"Time, it seems, doesn't flow. For some it's fast, for some it's slow.
In what to one race is no time at all, another race can rise and fall..." - The Minstrel

//My video channel//

Reply 16 of 18, by collector

User metadata
Rank l33t

Win3mu source has now been released: https://www.toptensoftware.com/win3mu/

Source here: https://bitbucket.org/toptensoftware/win3mu/src/master/

The Sierra Help Pages -- New Sierra Game Installers -- Sierra Game Patches -- New Non-Sierra Game Installers

Reply 17 of 18, by Jo22

User metadata
Rank l33t++
Rank
l33t++

Thank you very much for the news! I really hoped that this would happen eventually. 😀

Despite the few alternatives that showed up recently, the design of Win3mu is very (!) interesting, I think.
That unique 286/pseudo-protected-mode emulation aside, one can also learn a lot about the inner workings of Win16.

"Time, it seems, doesn't flow. For some it's fast, for some it's slow.
In what to one race is no time at all, another race can rise and fall..." - The Minstrel

//My video channel//

Reply 18 of 18, by collector

User metadata
Rank l33t

I would like to try this, but I would have to install a later version of VS to compile it, and I see no binaries. Win3mu is the one that I think has the most promise, at least on the Win platform; Boxedwine for non-Win users. I am keeping my eye on WineVDM, too.

The Sierra Help Pages -- New Sierra Game Installers -- Sierra Game Patches -- New Non-Sierra Game Installers