VOGONS


First post, by Xebec

Rank Newbie

I wasn't sure which forum to post this in, but I was curious whether anyone has benchmarked actual 16-bit code (i.e. booting DOS/Win 3.1 or other legacy OSes) on more modern CPUs, to see if 16-bit performance is still improving.

For example, is a Sandy Bridge chip faster than a Core 2 Duo at 16-bit code? How about a newer architecture like Skylake or Alder Lake?

I imagine cache sizes and clock speeds help, but I'm curious whether Intel's architecture changes over the last 15 years are still improving 16-bit code, or whether it has actually gotten slower over time as they focus on 32/64-bit performance.

FWIW I have access to a Haswell (4th gen) chip and a Skylake-derived (9th gen) chip and can run some tests to put numbers together, if people have app recommendations.
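
In case it helps, something like this BIOS-tick timing loop is the kind of crude, comparable workload I had in mind - a minimal sketch, assuming a 16-bit DOS compiler such as Turbo C (the loop count and the integer workload are arbitrary):

```c
/* tickbench.c - minimal sketch of a 16-bit DOS timing loop
   (hypothetical example, assuming a real-mode compiler such as
   Borland Turbo C). It times a fixed integer workload against the
   BIOS tick counter at 0040:006Ch (about 18.2 ticks/second), so
   results are coarse but comparable across machines. */
#include <stdio.h>
#include <dos.h>

#define ITERATIONS 10000000L   /* arbitrary workload size */

int main(void)
{
    /* far pointer to the BIOS daily tick count */
    volatile unsigned long far *ticks =
        (volatile unsigned long far *) MK_FP(0x0040, 0x006C);
    unsigned long start, end, i;
    volatile unsigned int x = 1;

    start = *ticks;
    for (i = 0; i < ITERATIONS; i++)
        x = x * 3 + 7;         /* simple 16-bit integer work */
    end = *ticks;

    printf("Elapsed: %lu BIOS ticks (about 18.2 ticks/s)\n", end - start);
    return 0;
}
```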

Thanks!

Reply 1 of 7, by fosterwj03

Rank Member

Modern CPUs run Windows 3.11 incredibly fast on bare metal, but I'm afraid that I've never done a benchmark comparison between different CPUs.

I can tell you that I've gotten Win 3.11 to run with full functionality on both an Ivy Bridge and a Haswell Core i5. Like I said, both run very fast.

I use a Matrox G200 PCI for video and an Ensoniq ES1370 AudioPCI for sound on both platforms. I also use a SATA SSD for the boot drive.

Reply 2 of 7, by fosterwj03

Rank Member

I've also done some DOS 7.0 (Win95) bare metal testing on my Rocket Lake Core i7-11700. It acts really weird. It reports the base clock of 2.5 GHz (technically correct, without the boost frequency), but it benchmarks all over the place. I have to use HIMEMX instead of MS HIMEM because HIMEM doesn't handle the memory architecture properly, and EMM386 doesn't work at all. I suspect that Intel has further broken a lot of backward compatibility in their most recent chipset designs.
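
For reference, the relevant lines of my CONFIG.SYS look roughly like this - just a sketch, assuming HIMEMX.EXE sits in C:\DOS (adjust the path to wherever you put it):

```
REM CONFIG.SYS sketch - HIMEMX in place of MS HIMEM.SYS;
REM EMM386 is omitted entirely, since it won't load on this board
DEVICE=C:\DOS\HIMEMX.EXE
DOS=HIGH
```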

Reply 3 of 7, by TrashPanda

Rank l33t
fosterwj03 wrote on 2021-12-24, 18:01:

I've also done some DOS 7.0 (Win95) bare metal testing on my Rocket Lake Core i7-11700. It acts really weird. […]

You are correct. Intel has done a ton of cleanup since Skylake and has been deprecating older uarch features from their newer designs. A lot of the older interrupts have been cleaned out, too. Honestly, it was time for the older stuff that's no longer needed, and was only kept for legacy compatibility, to be removed.

Reply 5 of 7, by Jo22

Rank l33t++

Some "recent" PC architecture designs removed A20 Gate also.
However, it's unclear what that means. There used to be multiple mechanisms for A20 Gate.
The classic toggle via keyboard controller, for instance. Or the Fast A20 switch via chipset. Other mechanisms from the PS/2 architecture also existed.
Depending on the motherboard in particular, one or all of them might be usable.
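
Just to illustrate the chipset variant: the Fast A20 switch lives in bit 1 of System Control Port A at I/O port 92h. A minimal sketch, assuming a 16-bit DOS compiler like Turbo C (inportb()/outportb() from dos.h) - whether port 92h does anything at all depends on the board:

```c
/* fasta20.c - sketch of the "Fast A20" chipset mechanism: bit 1 of
   System Control Port A (I/O port 92h). Bit 0 triggers a CPU reset
   on many chipsets, so it must never be set here. Assumes Turbo C's
   inportb()/outportb() from dos.h. */
#include <dos.h>

void fast_a20_enable(void)
{
    unsigned char v = inportb(0x92);
    if (!(v & 0x02))                          /* A20 not enabled yet? */
        outportb(0x92, (v | 0x02) & ~0x01);   /* set A20, keep reset bit clear */
}
```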

Anyway, that's just an example that comes to mind.
The quirky A20 gate is a popular example, I guess.
However, its necessity is also a bit overstated.
DOS applications don't need it per se to work in most cases;
it's rather up to DOS and its memory manager to care about A20.
In VMs like VirtualBox, which have a broken A20 gate, MS-DOS 6.2x runs as usual.
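
For illustration, the classic way software checks the A20 line is the wraparound test: with A20 masked, FFFF:0510 (linear 100500h) aliases 0000:0500. A rough sketch, again assuming a real-mode compiler like Turbo C (peekb()/pokeb() from dos.h):

```c
/* a20test.c - rough sketch of the classic A20 wraparound check,
   assuming a real-mode DOS compiler such as Turbo C. With A20
   masked, a write to 0000:0500 also shows up at FFFF:0510. */
#include <stdio.h>
#include <dos.h>

int main(void)
{
    unsigned char saved = (unsigned char)peekb(0x0000, 0x0500);
    int wrapped;

    /* flip the byte at 0000:0500 and look for the change at FFFF:0510 */
    pokeb(0x0000, 0x0500, (char)(saved ^ 0xFF));
    wrapped = ((unsigned char)peekb(0xFFFF, 0x0510) ==
               (unsigned char)(saved ^ 0xFF));
    pokeb(0x0000, 0x0500, (char)saved);   /* restore the original byte */

    printf("A20 appears %s\n",
           wrapped ? "disabled (high addresses wrap around)" : "enabled");
    return 0;
}
```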

Speaking of compatibility, I have a different opinion.
The PC's backwards compatibility made it future-proof.
Users have known since the mid-80s that they could depend on that, so they invested in x86 over the years, despite better systems being available at the time, such as ARM and 68k-series systems - or M1/M2 chips nowadays.
Considering that the majority of our society runs on digital systems that depend on PC architecture or x86 software, it's a dangerous game Intel plays.
There are many specialized systems in service that boot via BIOS/CSM.
And at some point, they have to be developed for as well.
Of course, they are neither fancy nor very popular, so they barely show up in statistics.
Just think of machinery in the power grid, the waterworks, etc. They aren't consumer electronics in the strict sense; they aren't visible.
Scrapping CSM and CPU instructions means that hardware must be archived, too, not just software.
Sure, there's also emulation - even Windows 10 on ARM has it. Thank goodness, otherwise things would have gone down the drain years ago. But emulation doesn't behave exactly like the real thing, and it can't substitute for the physical connections found on industry-standard systems.
Also, some weird instructions or events aren't yet covered by emulators written by hobbyists. Sure, these problems may seem negligible currently, but that's because the digital age is still comparatively young - and because backwards compatibility was strong until now.

Edit: Matth79 beat me to it / was quicker than me. 😅

"Time, it seems, doesn't flow. For some it's fast, for some it's slow.
In what to one race is no time at all, another race can rise and fall..." - The Minstrel

//My video channel//

Reply 6 of 7, by TrashPanda

Rank l33t
Jo22 wrote on 2021-12-24, 19:48:
Some "recent" PC architecture designs removed A20 Gate also. However, it's unclear what that means. There used to be multiple m […]

A lot of what you talk about in the second part is mostly industrial PCs, which still run on 486/586/Pentium-based hardware. None of that is affected by Intel deprecating old uarch features and interrupts in new designs, since the old CPUs are still being made and will remain widely available through industrial channels. Industrial PCs don't require fast CPUs or the latest hardware; in fact, they want the older hardware designs that are mature and have nearly 100% stability with exceedingly long uptimes.

Having worked in industrial environments for many years, I don't see Intel deprecating things as an issue. The older, mature, stable hardware is always going to be available, since there will always be demand for it - and Intel doesn't make any of it, and hasn't for many years now. ...Yes, you can still buy ISA industrial motherboards with 386-class SoCs on them that run at speeds no 386 should be capable of, at voltages so low you could run the unit from a battery.

Reply 7 of 7, by Jo22

Rank l33t++

Thanks for your understanding. 🙂
What I meant to say: it has begun. Long-established compatibility as such is on a downward spiral.

IMHO, strictly speaking, it kind of started with AMD's K10 (?) dropping 3DNow! already ten years ago.
That was the first time a major chip maker dropped support for one of its instruction sets.

The future of the 16-bit instructions for Real-Address Mode is uncertain once no operating system or application can use them anymore.
The PC/AT BIOS, option ROMs, the VGA BIOS, etc. provided real-mode compatible interfaces that software could use.
EFI/UEFI no longer provides this, so there's no ordinary way to boot up using these instructions, which reduces them to special use cases like virtualizers.
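
As a tiny illustration of such an interface, here's the teletype service of the video BIOS (INT 10h, AH=0Eh) - a sketch, again assuming a real-mode compiler like Turbo C (int86() and union REGS from dos.h). Under pure UEFI without a CSM, there is simply nothing behind this interrupt anymore:

```c
/* bioscall.c - sketch of software using a real-mode BIOS interface:
   INT 10h, AH=0Eh (teletype output). Assumes Turbo C's int86() and
   union REGS from dos.h. */
#include <dos.h>

int main(void)
{
    union REGS r;
    const char *msg = "Hello from the video BIOS\r\n";

    while (*msg) {
        r.h.ah = 0x0E;        /* function: teletype output */
        r.h.al = *msg++;      /* character to print */
        r.h.bh = 0;           /* video page 0 */
        int86(0x10, &r, &r);  /* call the video BIOS */
    }
    return 0;
}
```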

By contrast, it's not only 16-bit Real Mode DOS (say, MS-DOS 6.x): 32-bit OSes like Windows 98 or XP (x86) use V86 mode to run 16-bit applications in their VMs, but they require the BIOS or CSM itself to work.
Without a BIOS or CSM, they normally cannot run.

Another thing that I find questionable is the matter of proportionality.
Is saving a few megabytes of ROM space (the CSM) or of system files (NTVDM on Windows NT) worth sacrificing 40 years of interchangeability that the world was built upon?
It's not just about removing a few rarely used instructions, as it was with 3DNow!; it's about the very foundations of the x86 platform.
To use a metaphor, removing the BIOS and the related real-mode instructions is akin to cutting the roots of a tree.

Well, at least in my opinion. Maybe that's a generation thing or a cultural thing, too; not sure (I'm from old Europe). 🤷‍♂️
For example, just yesterday I saw a YT video about the US that told me about 'passive aggressiveness' - a strange concept I had never even heard of!
In my country, it's not rude per se to tell people when they make a mistake or violate the law.
Every citizen can/should make others aware of their mistakes, not just the authorities. That's our kind of common sense - simply telling it how it is. If someone rides their bike on the wrong side... just tell them!
As long as you do it in a calm, friendly or humorous way, why not? 😀

Also, discarding paper documents for the sake of convenience seemed very strange to me.
I wouldn't want that; I want to be able to store important documents in a drawer as a backup, in case criminal hackers mess up my data (or other people's) online, or in case of disasters. That way, I could carry physical copies of important documents with me in my bag or suitcase. It's good not to depend solely on private companies that store these documents on their servers.

Anyway, long story short, I share the opinion that things aren't obsolete as long as they have a use or a purpose -
even if that purpose is only as a fallback.
That does not mean, of course, that my point of view or "way of life" is correct. Only time will tell. 🙂

Merry Christmas. 🎅🎄

Edit: Some typos fixed. I'm using an Android device and
I am fighting the spell checker/auto-correction. 😭

"Time, it seems, doesn't flow. For some it's fast, for some it's slow.
In what to one race is no time at all, another race can rise and fall..." - The Minstrel

//My video channel//