VOGONS


Reply 40 of 46, by mkarcher

User metadata
Rank l33t
Horun wrote on 2022-09-12, 03:43:

So in layman's terms the card is not a caching controller, but a VLB multi-I/O IDE HD controller with XMS/EMS capability?

To accept EMS/XMS as a valid description, we have to make gratuitous use of "layman's terms". EMS and XMS describe two particular ways for software to interact with the memory hardware. In the case of EMS, there is a hardware-specific driver that receives software requests to select certain areas of memory and programs the hardware accordingly. For XMS, there is basically just one driver, HIMEM.SYS, which is hardware-independent, because all memory boards that provide memory through the XMS programming interface implement the same hardware interface, called "extended memory". The XMS specification is written in such a way that it is basically impossible to implement XMS with hardware that doesn't provide its memory as "extended memory".
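
To make the difference concrete, here is a minimal sketch of how a real-mode program talks to each interface. It assumes a 16-bit DOS compiler such as Turbo C and that an EMS driver and HIMEM.SYS are already loaded; error checking of the EMS status byte in AH is omitted.

```c
/* Sketch only: assumes a 16-bit DOS compiler (e.g. Turbo C) and that an
   EMS driver and HIMEM.SYS are already loaded. */
#include <dos.h>
#include <stdio.h>

int main(void)
{
    union REGS r;
    struct SREGS s;

    /* EMS: the application calls the board-specific EMM via INT 67h.
       AH=41h returns the segment of the 64KB page frame in BX. */
    r.h.ah = 0x41;
    int86(0x67, &r, &r);
    printf("EMS page frame at segment %04X\n", r.x.bx);

    /* AH=42h: unallocated / total 16KB pages (BX / DX). */
    r.h.ah = 0x42;
    int86(0x67, &r, &r);
    printf("EMS pages free: %u of %u\n", r.x.bx, r.x.dx);

    /* XMS: there is no board-specific interrupt. The program asks via
       INT 2Fh, AX=4300h whether HIMEM.SYS is present and then fetches
       a far entry point (AX=4310h) through which all XMS calls are made. */
    r.x.ax = 0x4300;
    int86(0x2F, &r, &r);
    if (r.h.al == 0x80) {
        r.x.ax = 0x4310;
        segread(&s);
        int86x(0x2F, &r, &r, &s);
        printf("XMS driver entry point at %04X:%04X\n", s.es, r.x.bx);
    } else {
        printf("No XMS driver loaded\n");
    }
    return 0;
}
```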

The VL-200 does not implement the "extended memory" hardware interface, so it can't be used to provide memory to XMS applications. The VL-200 also cannot perform in hardware the operations that software could request through the EMS programming interface, so it can't be directly utilized by software that requires EMS either.

Nevertheless, in the grand scheme, the VL-200 (and VL-230) do provide extra RAM to the computer that cannot be directly accessed by real-mode DOS applications. In that way, it is similar to EMS and XMS, which also cannot be directly accessed by the processor in real mode. (I'm intentionally not going into details like the HMA here.)

Reply 41 of 46, by AlexZ

User metadata
Rank Member

There is a much simpler explanation: the VL-200 can't be used to provide EMS because it can only map a single 32KB region. EMS 3.0 allowed a single 64KB region, EMS 3.2 allowed 4x 16KB regions, and EMS 4.0 allowed mapping regions anywhere in the 1MB address space. EMS 4.0 thus allowed primitive multitasking, where the running program below 640KB could be "swapped out" without losing application state.
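
As an illustration of the 4x 16KB scheme, this is roughly what the mapping looks like from the program side. A sketch only, assuming a 16-bit DOS compiler (Turbo C style int86 calls) and a loaded EMM; the status returned in AH is not checked here.

```c
/* Sketch: map four 16KB logical pages into the four physical pages of the
   EMS page frame, then access the 64KB window from real mode. */
#include <dos.h>

int main(void)
{
    union REGS r;
    unsigned handle, frame_seg;
    int page;

    r.h.ah = 0x41;                 /* get page frame segment -> BX */
    int86(0x67, &r, &r);
    frame_seg = r.x.bx;

    r.h.ah = 0x43;                 /* allocate pages */
    r.x.bx = 4;                    /* four 16KB logical pages = 64KB */
    int86(0x67, &r, &r);
    handle = r.x.dx;               /* EMM handle for this allocation */

    for (page = 0; page < 4; page++) {
        r.h.ah = 0x44;             /* map/unmap handle page */
        r.h.al = (unsigned char)page;  /* physical page 0..3 in the frame */
        r.x.bx = page;             /* logical page within the handle */
        r.x.dx = handle;
        int86(0x67, &r, &r);
    }

    /* The 64KB window at frame_seg:0000 now shows the four mapped pages;
       real-mode code can read and write it like any other memory. */
    *(char far *)MK_FP(frame_seg, 0) = 42;

    r.h.ah = 0x45;                 /* release the handle again */
    r.x.dx = handle;
    int86(0x67, &r, &r);
    return 0;
}
```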

The hardware mechanism for mapping memory into the address space through ISA had already been in place for a long time, as it was needed to run BIOSes on ISA expansion cards (e.g. video BIOS). With EMS, instead of a BIOS ROM, real RAM on the expansion card was accessed through ISA. For this reason it was originally the simplest mechanism to expand memory on 8086/286 motherboards, at a time when RAM sockets didn't exist. EMS cards were large, and obviously the memory chips on them wouldn't have fit on the motherboard.

EMS only made sense for real-mode programs otherwise restricted to 640KB. Those were typically written for the 8086-386. Protected mode became popular with the proliferation of the 486, as early 486 systems had 4-8MB RAM. The 386SX suffered a huge performance penalty for 32-bit code that made it impractical. EMS was still relevant in 386 times, as those machines usually had just 2MB RAM, not enough to justify using protected mode. When EMS is provided through a software solution like EMM386, the CPU needs to be in virtual 8086 mode so that EMM386 can access memory above 1MB while the program still appears to run in real mode.
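
For completeness, the usual software route on a 386 looks something like this in CONFIG.SYS (the C:\DOS path is just an example for MS-DOS 5/6; the RAM switch tells EMM386 to provide EMS as well as upper memory blocks):

```
DEVICE=C:\DOS\HIMEM.SYS
DEVICE=C:\DOS\EMM386.EXE RAM
DOS=HIGH,UMB
```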

Last edited by AlexZ on 2022-09-12, 18:08. Edited 1 time in total.

Pentium III 900E, ECS P6BXT-A+, 384MB RAM, NVIDIA GeForce FX 5600 128MB, Voodoo 2 12MB, 80GB HDD, Yamaha SM718 ISA, 19" AOC 9GlrA
Athlon 64 3400+, MSI K8T Neo V, 1GB RAM, NVIDIA GeForce 7600GT 512MB, 250GB HDD, Sound Blaster Audigy 2 ZS

Reply 42 of 46, by mkarcher

User metadata
Rank l33t
jakethompson1 wrote on 2022-09-12, 00:11:

Or would the delay line have the advantage of making those timings independent of CLK since it could vary anywhere from 4.77 to 10 MHz or higher (or rethinking this, the DRAM access timing diagram is much more fine-grained than the 210 ns resolution of the 4.77 MHz clock so I guess it gets around that problem)?

Both are good points for using the digital delay line as a timing generator. They were very common and likely cheap in those days. OSC would at least be clock speed independent, but it still only provides a 70ns grid. Well, the CGA card actually did derive its RAM timing from OSC, but OTOH we know very well that the CGA card's RAM performance isn't meant to compete with 1WS or even 0WS 16-bit AT memory boards. Take a look at the oldest original 16-bit RAM card, the IBM 128KB memory expansion option (to expand the AT mainboard from 512KB on-board memory to 640KB conventional memory): http://www.minuszerodegrees.net/misc/5170_mem … _board_128K.jpg . On that board, U18 is a delay line. The latest IBM AT memory option, the Enhanced Memory Expansion Adapter (see http://www.minuszerodegrees.net/5170/cards/IB … n%20Adapter.jpg ), also uses a delay line for memory timing; in this case it's U20.
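
For reference, the two grids mentioned above work out as follows (assuming OSC is the usual 14.31818 MHz ISA bus clock):

$$\frac{1}{14.31818\,\mathrm{MHz}} \approx 69.8\,\mathrm{ns}, \qquad \frac{1}{4.77\,\mathrm{MHz}} \approx 210\,\mathrm{ns}$$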

Reply 43 of 46, by mkarcher

User metadata
Rank l33t
AlexZ wrote on 2022-09-12, 18:04:

EMS only made sense for real-mode programs otherwise restricted to 640KB. Those were typically written for the 8086-386. Protected mode became popular with the proliferation of the 486, as early 486 systems had 4-8MB RAM. The 386SX suffered a huge performance penalty for 32-bit code that made it impractical. EMS was still relevant in 386 times, as those machines usually had just 2MB RAM, not enough to justify using protected mode.

Don't forget Windows 3.1 in standard mode! This mode operates the 286 or 386 processor in 16-bit protected mode, allowing seamless access to extended memory while not paying the penalty for 32-bit code on the 386SX. If you wanted to keep using your EMS-enabled (Turbo-)XT 8088 software, of course you still needed EMS, and since you would want to avoid the performance penalty of running in virtual 8086 mode, 16-bit hardware EMS boards (or 386SX mainboards that implemented EMS using their on-board RAM) were indeed still useful.

Reply 44 of 46, by dionb

User metadata
Rank l33t++

Little New-Year bump. Looks like I have myself a VL-230 aka EX3135.

Same big "HT2000" chip as the one already posted but one difference: the 512kB DRAM chip is populated with something at least looking superficially like a DRAM chip (TI TMA5416DZ according to the printing on it). Also the IDE controller is labeled as an Adaptec AIC-25VL01Q, and there's no sticker on the EPROM either. No other identifying marks either, I only identified it searching for the HT2000 and finding this topic.

These differences don't seem to matter in practice; behaviour is as already described. Not bad for a generic VLB I/O card, but not exactly a caching controller either.

Reply 45 of 46, by mkarcher

User metadata
Rank l33t
dionb wrote on 2023-01-04, 20:24:

Same big "HT2000" chip as the one already posted but one difference: the 512kB DRAM chip is populated with something at least looking superficially like a DRAM chip (TI TMA5416DZ according to the printing on it).

If the VL-230 BIOS works like the VL-200 BIOS, the POST will fail if there is no working RAM on that card, so most likely it does not just look like a RAM chip, but it actually is one. Having the thing pass POST without any RAM installed would have made the fakeness too obvious, I guess.

Reply 46 of 46, by dionb

User metadata
Rank l33t++
mkarcher wrote on 2023-01-04, 23:02:
dionb wrote on 2023-01-04, 20:24:

Same big "HT2000" chip as the one already posted but one difference: the 512kB DRAM chip is populated with something at least looking superficially like a DRAM chip (TI TMA5416DZ according to the printing on it).

If the VL-230 BIOS works like the VL-200 BIOS, the POST will fail if there is no working RAM on that card, so most likely it does not just look like a RAM chip, but it actually is one. Having the thing pass POST without any RAM installed would have made the fakeness too obvious, I guess.

Hmm, will give it a try jumpered to disable the chip, that should kill POST in that case.

Pretty expensive to fake a cache controller using real RAM for nothing more than that POST. But I suppose the consumer paid for it...