VOGONS


How to add more memory to a 286?


Reply 20 of 46, by Jo22

Rank: l33t++
mkarcher wrote on 2024-07-10, 16:57:

The Windows 3.1 subsystem (WoW) was limited to 16-bit code, too.

I don't think that's entirely true. Windows 3.1 in 386 enhanced mode should be able to provide 32-bit memory segments to 16-bit applications. This means you should be able to execute 32-bit code in a 16-bit Windows environment, but you would need to do stuff by hand (like loading the 32-bit code yourself), which the operating system only implements for you if you just need 16-bit stuff. I'd expect a lot of Win32s applications (if not all) don't need any kernel extensions to the 386 enhanced mode Windows 3.1 kernel.
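(To illustrate the quoted point, a minimal from-memory sketch of what "doing it by hand" could look like through the DPMI host that 386 enhanced mode provides. The access-rights value assumes a ring-3 32-bit code segment; error checking is omitted:

  ; 16-bit app asking the Windows 3.1 DPMI host for a 32-bit code segment
  mov  ax, 0000h     ; DPMI: allocate LDT descriptors
  mov  cx, 1         ; one descriptor
  int  31h           ; -> AX = selector (CF set on failure)
  mov  bx, ax        ; selector for the calls below
  mov  ax, 0009h     ; DPMI: set descriptor access rights
  mov  cx, 40FAh     ; CH=40h sets the D bit (32-bit), CL=FAh: ring-3 readable code
  int  31h
  ; set base/limit with AX=0007h / AX=0008h, copy the 32-bit code there,
  ; then enter it with a far call through the new selector
)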

I agree. It's just that, to my knowledge, the RISC ports of Windows NT 3.x started out with a software emulation of an 80286 processor.
Windows NT 4 upgraded that to 80486 level and could do all those things.
Normal Windows NT for x86 didn't impose any such processor-related restrictions.
I stand to be corrected, though.

Edit: I forgot to mention, I specifically meant the WoW of Windows NT on RISC here.
The x86 versions of Windows NT did have access to a "virtualized" host processor with its native instruction set:
i.e., the WoW ran under control of NTVDM.

Edit:

mkarcher wrote on 2024-07-10, 16:57:
Jo22 wrote on 2024-07-09, 21:44:

The idea that a 286 PC is best used as a fast XT makes me depressed.

I don't think it is as bad as it is portrayed in this thread. It is true that if you want to use DOS on a 286, you just use it as a fast XT - but that is mostly the same on any computer. Even a Pentium 4 computer running DOS is "just a fast XT".

Maybe. Maybe it's just me, too. 🤷‍♂️ To me, it's just a bit of a waste, though. It doesn't do the hardware justice. 🙁

It's like buying a Game Boy Advance (ARM RISC plus a Z80-class CPU) solely to play original Game Boy (Z80-class) games without ever playing native games on it.
The advanced hardware inside never gets used; it just sits there, idling forever.

Or, it's like using a VGA card solely to play CGA games all the time.

Or using merely the VGA chip of a highly advanced CAD board, while the other ~20 chips just draw power for no reason.

It… just doesn't feel right. Sure, it works, but it's a waste.

(The CGA/VGA comparison would at least make sense if the VGA's memory in the A segment had been utilized to increase conventional memory.
Software packages like QRAM or QEMM had this capability, I think.
That way the VGA's video memory wouldn't have been completely wasted.

Because the memory between 640KB and 736KB can be used as conventional memory as long as only CGA-only and text-mode applications are being run.
That's what OS/2 2.x and later versions offered to virtualized DOS applications via the DOS settings, too.)
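(If memory serves, Quarterdeck's VIDRAM utility, which shipped with QRAM and QEMM, did exactly this. A hypothetical session, assuming a memory manager that can backfill the A000h-B7FFh range:

  C:\> VIDRAM ON

The arithmetic: 640 KB + 64 KB (A000h graphics memory) + 32 KB (B000h mono area) = 736 KB of conventional memory, at the cost of EGA/VGA graphics modes.)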

Edit: Or to put it this way: an AT with 1 MB or less feels like a PC or PC/XT that has been stripped of most of its memory and its drives.

I mean, let's imagine someone said "an IBM PC is just a fast ZX81. 64KB of RAM and a cassette tape drive are enough to run ROM BASIC".

Wouldn't this statement make some of you feel a bit sour or depressed?

If so, that's how I felt when I saw how 80286 PCs have been used over the past decades.

I'm not feeling angry or anything, I'm just mildly sad. 😔

PS: Having just 64KB of RAM in an IBM PC does away with all the nasty segment issues.
It's comparable to not having to deal with the A20 line when an IBM AT has just 640KB and no extended memory.

"Time, it seems, doesn't flow. For some it's fast, for some it's slow.
In what to one race is no time at all, another race can rise and fall..." - The Minstrel

//My video channel//

Reply 21 of 46, by jakethompson1

Rank: Oldbie
mkarcher wrote on 2024-07-10, 16:57:

On 286 systems, DOS extenders often didn't provide sufficient advantage over real mode to be worth the hassle. The performance impact of the switch-to-real-mode workaround is an actual issue if your application relies on DOS services a lot. Different 286 systems had their own quirks with handling the A20 line and providing reset methods, so writing well-performing, universally compatible DOS extenders was hard, but not impossible. There were Phar Lap's 286|DOS-Extender and Rational Systems' DOS/16M (the predecessor of the ubiquitous DOS/4G, best known for the variant bundled with the Watcom compiler as DOS/4GW). But most importantly, there was Windows 3.1, which likely contains the most practice-proven 16-bit DPMI host of all time.

I read elsewhere that Intel deliberately tried to nurture the development of 386 DOS extenders so as to make them widely available and cheap for developers, while making no such efforts for the 286 (for the obvious reason of the 286 being second-sourced). Do you think there is any truth to that?
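(For context, the "switch-to-real-mode workaround" quoted above is the classic AT reset dance: the 286 has no instruction to leave protected mode, so the extender resets the CPU and tells the BIOS to skip POST and resume. A rough sketch from memory; the shutdown-byte value and resume-vector handling vary between BIOSes:

  cli
  mov  al, 0Fh       ; select the CMOS shutdown status byte (index 0Fh)
  out  70h, al
  mov  al, 05h       ; 05h: after reset, the BIOS far-jumps via 0040:0067
  out  71h, al
                     ; (the real-mode resume address must already sit at 0040:0067)
  mov  al, 0FEh      ; keyboard controller command: pulse the CPU reset line
  out  64h, al
  hlt                ; wait for the reset to take effect
)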

Reply 22 of 46, by douglar

Rank: l33t
jakethompson1 wrote on 2024-07-10, 21:59:
mkarcher wrote on 2024-07-10, 16:57:

On 286 systems, DOS extenders often didn't provide sufficient advantage over real mode to be worth the hassle. The performance impact of the switch-to-real-mode workaround is an actual issue if your application relies on DOS services a lot. Different 286 systems had their own quirks with handling the A20 line and providing reset methods, so writing well-performing, universally compatible DOS extenders was hard, but not impossible. There were Phar Lap's 286|DOS-Extender and Rational Systems' DOS/16M (the predecessor of the ubiquitous DOS/4G, best known for the variant bundled with the Watcom compiler as DOS/4GW). But most importantly, there was Windows 3.1, which likely contains the most practice-proven 16-bit DPMI host of all time.

I read elsewhere that Intel deliberately tried to nurture the development of 386 DOS extenders so as to make them widely available and cheap for developers, while making no such efforts for the 286 (for the obvious reason of the 286 being second-sourced). Do you think there is any truth to that?

A couple thoughts:

  • Intel was in a much different position with respect to software development by the time the 386 was out. It wasn't going to wait for IBM and Microsoft to make stuff happen.
  • The 286 had real mode and protected mode with very little overlap between the two, but the 386 had some real capabilities for extending DOS, so there was something for Intel to make available with the 386.

Reply 23 of 46, by mkarcher

Rank: l33t
jakethompson1 wrote on 2024-07-10, 21:59:

I read elsewhere that Intel deliberately tried to nurture the development of 386 DOS extenders so as to make them widely available and cheap for developers, while making no such efforts for the 286 (for the obvious reason of the 286 being second-sourced). Do you think there is any truth to that?

This story fits the picture quite well. Intel was convicted of anti-competitive behaviour for sponsoring printed advertisements of PC retailers under the condition that the whole advertisement contained no computer with a non-Intel x86 processor. Harris published a document praising the 286 as more clock-efficient for 16-bit software than the 386SX. This shows that both Intel's mindset and the market situation fit the statement. Furthermore, you actually see a lot of software relying on DOS/4GW, because this edition of DOS/4G was bundled with Watcom C++ in a way that felt essentially "for free". It would not surprise me at all if Intel sponsored the deal between Watcom and Rational Systems that allowed this to happen.

So: your claim might either be true, or it is a quite well-made conspiracy theory; I have no way to tell the difference unless someone digs up an anti-trust lawsuit concerning this claim.

Reply 24 of 46, by bakemono

Rank: Oldbie

Next time I order PCBs I'm probably going to do a 2MB/8MB memory board (30 pin SIMMs). Should be able to do conventional memory backfill and UMBs in an 8-bit slot, XMS in a 16-bit slot, and EMS if someone writes a driver. Though I doubt that every configuration will be available in one CPLD image without reflashing.

GBAJAM 2024 submission on itch: https://90soft90.itch.io/wreckage

Reply 25 of 46, by MrSVCD

Rank: Newbie
bakemono wrote on 2024-07-11, 19:25:

Next time I order PCBs I'm probably going to do a 2MB/8MB memory board (30 pin SIMMs). Should be able to do conventional memory backfill and UMBs in an 8-bit slot, XMS in a 16-bit slot, and EMS if someone writes a driver. Though I doubt that every configuration will be available in one CPLD image without reflashing.

Ooooh...
This looks like what I am looking for! Where can I get more info?

Reply 27 of 46, by maxtherabbit

Rank: l33t
bakemono wrote on 2024-07-11, 19:25:

Next time I order PCBs I'm probably going to do a 2MB/8MB memory board (30 pin SIMMs). Should be able to do conventional memory backfill and UMBs in an 8-bit slot, XMS in a 16-bit slot, and EMS if someone writes a driver. Though I doubt that every configuration will be available in one CPLD image without reflashing.

🤣 I have been toying with making one of these ever since you helped me troubleshoot my Adaptec AHA-1540 being unable to DMA to an 8-bit target in the UMA (EMS card)

Glad someone is doing it 😀

Reply 28 of 46, by MrSVCD

Rank: Newbie

How do 16-bit reads differ from 8-bit reads on a 16-bit card?
I am familiar with how the 68000 does this, but from the little I have read, it seems that the ISA card decides how it is read.

Reply 29 of 46, by maxtherabbit

Rank: l33t
MrSVCD wrote on 2024-07-12, 18:10:

How do 16-bit reads differ from 8-bit reads on a 16-bit card?
I am familiar with how the 68000 does this, but from the little I have read, it seems that the ISA card decides how it is read.

the short answer is that the 16-bit ISA card has to assert either IOCS16# or MEMCS16# when it detects an address on the bus that is targeted at it, then the chipset on the motherboard has to sample the appropriate control line and initiate the correct-width transfer cycle

the timing of all this is... frustrating

Reply 30 of 46, by sqpat

Rank: Newbie
bakemono wrote on 2024-07-11, 19:25:

Next time I order PCBs I'm probably going to do a 2MB/8MB memory board (30 pin SIMMs). Should be able to do conventional memory backfill and UMBs in an 8-bit slot, XMS in a 16-bit slot, and EMS if someone writes a driver. Though I doubt that every configuration will be available in one CPLD image without reflashing.

Very cool project. I started some work on an EMS 4.0 driver very recently out of annoyance that existing ones often suck or are broken. My full disassembly of the vlemm.sys driver for the VLSI SCAMP is here: https://github.com/sqpat/SQEMM/blob/main/vlemm/vlemm.asm . It's pretty broken and also very slow anyway. I was going to build out a driver for SCAMP first, then possibly port it to other chipsets. I can't promise anything very complete, or too soon, but I'd love to eventually contribute a proper driver.
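(For anyone following along, the client-side interface such a driver has to serve is LIM EMS via INT 67h. A minimal usage sketch from memory, with error checking omitted:

  mov  ah, 41h       ; EMS: get page frame segment
  int  67h           ; -> BX = frame segment (often D000h or E000h)
  mov  es, bx
  mov  ah, 43h       ; EMS: allocate pages
  mov  bx, 4         ; four 16KB pages = 64KB
  int  67h           ; -> DX = EMM handle
  mov  ah, 44h       ; EMS: map handle page
  mov  al, 0         ; physical page 0 of the frame (DX still holds the handle)
  mov  bx, 0         ; logical page 0 of the allocation
  int  67h           ; ES:0000 now addresses the mapped 16KB page
)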

I'm also working on a 16-bit Doom port which makes use of EMS 4.0/backfill, which is the driving force behind this. There aren't any modern memory boards out there that support this (not the lo-tech, and not the PicoMem), so it's currently dependent on motherboard chipsets or specific old memory boards that support these features.

Reply 32 of 46, by mkarcher

Rank: l33t
maxtherabbit wrote on 2024-07-12, 17:13:

🤣 I have been toying with making one of these ever since you helped me troubleshoot my Adaptec AHA-1540 being unable to DMA to an 8-bit target in the UMA (EMS card)

Wow, that whole Adaptec busmaster thing is a huge can of worms that has never really been opened publicly. The letter after 1540/1542 does matter in that regard. I have seen a 386SX system with chipset-based EMS (TopCat in this case), and the AHA-1542CF was unable to do busmaster DMA into EMS properly. Every other byte was missing on reads from disk, and I did not even try writing to disk. On the other hand, the AHA-1542B works perfectly in the same configuration. Upon disassembly of the firmware on the card, I found that the 1542CF special-cases the UMA and forces 8-bit cycles in that range. It seems the onboard EMS does not like busmaster 8-bit cycles. The 1542B does not have that logic, and I don't know whether the busmaster DMA engine on that card is able to do 8-bit bulk transfers at all.

Reply 33 of 46, by wierd_w

Rank: Oldbie

Yuck.

OK, so, from what I gather, a "maximally efficient" EMS card needs to be able to determine whether the accesses are being forced into 8-bit mode, and then service them appropriately?

Reply 34 of 46, by mkarcher

Rank: l33t
wierd_w wrote on 2024-07-13, 09:29:

OK, so, from what I gather, a "maximally efficient" EMS card needs to be able to determine whether the accesses are being forced into 8-bit mode, and then service them appropriately?

The ISA target, like an EMS/XMS card, may propose 16-bit cycles by asserting MEMCS16#. If the source accepts that proposal, it will transfer odd bytes in odd-byte-only cycles using D8-D15: A0 will be high and SBHE# will be asserted (low). If the proposal is not accepted, the source will drive an XT-style 8-bit cycle with A0 high, SBHE# deasserted (high) and the data transferred on D0-D7. As long as the source is the processor, an ISA card can be sure the offer to perform a 16-bit transfer gets accepted. If the cycle is performed by the 1542CF, it will not be accepted in the UMA. The 1542CF will perform 16-bit cycles into conventional memory and into extended memory, just not into the UMA (A000-FFFF).
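(Summarizing the byte-lane cases from memory:

  A0  SBHE#  data lanes
  --  -----  ----------------------------------------
  0   low    D0-D15 (full 16-bit word transfer)
  0   high   D0-D7  (even byte only)
  1   low    D8-D15 (odd byte, 16-bit style)
  1   high   D0-D7  (odd byte, XT-style 8-bit cycle)
)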

EDIT (addendum): Also, if the source is the 8-bit DMA controller (channels 0..3), a 16-bit memory proposal will not be accepted. For this to work without any hardware support on the 16-bit card, the ISA specification mandates that the mainboard detects this case (i.e. SBHE# not asserted, A0 high, MEMCS16# asserted) and forwards data between the low byte lane and the high byte lane. A memory card thus need not detect that case and handle it differently, as long as the mainboard conforms to the spec. The issue described with the chipset-based hardware EMS seems to be a bug/limitation of that chipset when combining hardware EMS and busmastering.

Last edited by mkarcher on 2024-07-14, 10:58. Edited 1 time in total.

Reply 35 of 46, by wierd_w

Rank: Oldbie

That is very strange behavior on the part of the Adaptec card.

Short of altering the card's logic, there probably isn't a solution either.

Any ideas why they treat the HMA / upper memory area that way?

Reply 36 of 46, by jakethompson1

Rank: Oldbie
wierd_w wrote on 2024-07-14, 02:41:

That is very strange behavior on the part of the Adaptec card.

Short of altering the card's logic, there probably isn't a solution either.

Any ideas why they treat the HMA / upper memory area that way?

To be clear, this would only occur with a physical address pointing to the upper memory area. When running on a 386 with EMM386, the physical address used for DMA to a UMB would be somewhere up in extended memory, and this would not occur.

As to why physical UMB addresses were hardwired as 8-bit, I don't know. If the system were set up so that some 8-bit memory cards erroneously had MEMCS16# asserted anyway, wouldn't the user have already encountered that issue during boot, long before the SCSI card gets into the picture?
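(Hypothetical numbers for illustration: EMM386 might back a UMB at linear D0000h with a physical page at, say, 110000h. A busmaster programmed with 110000h is addressing well above the UMA, so the 1542CF's 8-bit special case never triggers. This is also why drivers are supposed to obtain such addresses through VDS rather than assume linear = physical.)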

Reply 37 of 46, by jakethompson1

Rank: Oldbie
mkarcher wrote on 2024-07-13, 07:34:
maxtherabbit wrote on 2024-07-12, 17:13:

🤣 I have been toying with making one of these ever since you helped me troubleshoot my Adaptec AHA-1540 being unable to DMA to an 8-bit target in the UMA (EMS card)

Wow, that whole Adaptec busmaster thing is a huge can of worms that has never really been opened publicly. The letter after 1540/1542 does matter in that regard. I have seen a 386SX system with chipset-based EMS (TopCat in this case), and the AHA-1542CF was unable to do busmaster DMA into EMS properly. Every other byte was missing on reads from disk, and I did not even try writing to disk. On the other hand, the AHA-1542B works perfectly in the same configuration. Upon disassembly of the firmware on the card, I found that the 1542CF special-cases the UMA and forces 8-bit cycles in that range. It seems the onboard EMS does not like busmaster 8-bit cycles. The 1542B does not have that logic, and I don't know whether the busmaster DMA engine on that card is able to do 8-bit bulk transfers at all.

Is it possible this is a mistaken compatibility fix for the reverse issue with the "B" version, or is the "C" a cost-reduced design where they made an oversimplification while cost-reducing?

Reply 38 of 46, by Jo22

Rank: l33t++
jakethompson1 wrote on 2024-07-14, 02:47:

As to why physical UMB addresses were hardwired as 8-bit, I don't know.
If the system were set up so that some 8-bit memory cards erroneously had MEMCS16# asserted anyway,
wouldn't the user have already encountered that issue during boot, long before the SCSI card gets into the picture?

Good question. Didn't some EMS boards use 8-bit I/O despite being 16-bit cards?
I vaguely remember either the AST Rampage 286 or the AST Rampage AT being one of them.

I mean, the UMA was called the "adapter segment" at one point; the location where expansion cards place their ROM chips and perform memory I/O.
So it might be some sort of compatibility consideration, whatever that may be.

Another reason for omitting 16-bit I/O might be the availability of shadow memory for option ROMs.
By the late 80s/early 90s, there perhaps was no dire need to provide full 16-bit I/O in the UMA anymore.
ROM code was expected to be shadowed anyway; circuits could be simplified without losing performance.

Not sure about SCSI. Did SCSI host controllers perform any buffering etc.?
SCSI data width via the popular 50-pin connector was 8-bit anyway, at least.

"Time, it seems, doesn't flow. For some it's fast, for some it's slow.
In what to one race is no time at all, another race can rise and fall..." - The Minstrel

//My video channel//

Reply 39 of 46, by Sphere478

Rank: l33t++

This seems like the right place to ask, given the knowledgeable folks and the topic of the last few replies above.

So there is an issue with my TI SXL2-66 on my 386 motherboard with clock doubling: it locks the system whenever I enable it. Apparently this has to do with the DMA of the AHA-1542 SCSI controller I am using.

I admit to being pretty ignorant about this issue, as I haven't looked into it yet and haven't had the system out for a while. Eventually I would like to circle back to it. Could one of you explain this to me? I am told that a different SCSI controller may solve it.

The controllers are a little expensive, so it would be nice to know for sure which one will work.

I think feipoa is using an AHA-1520B with his SXL2.

Sphere's PCB projects.
-
Sphere’s socket 5/7 cpu collection.
-
SUCCESSFUL K6-2+ to K6-3+ Full Cache Enable Mod
-
Tyan S1564S to S1564D single to dual processor conversion (also s1563 and s1562)