First post, by Xebec
I’d like to understand a little more about how memory (latency) impacted x86 performance over the early years.
I think in the 8086/8088 era, the CPU was generally slow enough at execution that the DRAM of the day was (always?) fast enough to keep up with the 8088/8086.
In the 80286 era, it looks like you could buy RAM fast enough to feed a 286 at 6 or 8 MHz without any added latency. Once the 286 hit 12 MHz, I think the DRAM of the time was starting to require wait states to keep up with the CPU’s requests. Is this true? Was “zero wait state” RAM truly zero-delay for access?
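To make my question concrete, here’s the rough arithmetic I have in my head. I’m assuming a standard 2-clock 286 bus cycle and guessing ~50 ns of overhead for address decode and buffers (both numbers are my assumptions, not datasheet values):

```python
# Rough zero-wait-state check: what DRAM access time would a CPU
# need, given a 2-clock bus cycle and some fixed board overhead?
# The 50 ns overhead figure is a guess, not a datasheet value.
def max_dram_ns(mhz, bus_clocks=2, overhead_ns=50):
    cycle_ns = 1000.0 / mhz              # one CPU clock in nanoseconds
    return bus_clocks * cycle_ns - overhead_ns

for mhz in (6, 8, 12, 16, 25, 33):
    print(f"{mhz} MHz: DRAM must respond within ~{max_dram_ns(mhz):.0f} ns")
```

By this (very rough) math, 8 MHz leaves about 200 ns for the DRAM, which common 120–150 ns parts could meet, while 12 MHz squeezes that to under 120 ns, which seems to match the point where wait states started appearing. Does that line up with reality?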
By the 386 era, I know external cache became a thing (common?), which suggests there was some slowdown without it. Could you buy RAM fast enough to feed a 16 MHz 386 with zero wait states, either around the time it launched or a few years later?
Then with the 486, the internal cache became a necessity, as the fully pipelined CPU was very hungry for RAM while running at 25–33 MHz at launch.
For all the situations above, how does DRAM refresh affect performance? Would it cause occasional stalls for the early CPUs, or would they not notice it?
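For the refresh question, here’s the back-of-envelope I’ve seen for the PC/XT-style scheme, where DMA channel 0 runs a refresh cycle roughly every 15.1 µs. The “4 stolen bus clocks per refresh” figure is my assumption; the real cost varies by machine:

```python
# Back-of-envelope for PC/XT-style DRAM refresh overhead:
# one DMA refresh cycle about every 15.1 microseconds, each
# stealing ~4 bus clocks (assumed; real figures vary by design).
def refresh_overhead(cpu_mhz, refresh_period_us=15.1, stolen_clocks=4):
    clocks_per_period = cpu_mhz * refresh_period_us  # clocks between refreshes
    return stolen_clocks / clocks_per_period         # fraction of bus time lost

print(f"4.77 MHz PC: ~{refresh_overhead(4.77):.1%} of bus cycles lost to refresh")
```

That works out to roughly 5–6% of bus bandwidth on the original PC, which matches the ballpark I’ve seen quoted. Whether the CPU actually stalls that often in practice (vs. the refresh hiding in cycles where the bus was idle anyway) is part of what I’m asking.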
And lastly, do the original 486DX-25 and 486DX-33 technically leave some performance on the table because RAM wasn’t fast enough to feed the CPU directly (hence the need for a cache)?
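To put numbers on what I mean by “leaving performance on the table”: a simple average-memory-access-time estimate, with cycle counts that are purely illustrative assumptions (1-cycle cache hit, 5-cycle external access), not measured 486 figures:

```python
# Average memory access time sketch: how much an on-chip cache
# helps when external RAM is slow. Cycle counts are illustrative
# assumptions, not measured 486 timings.
def amat_cycles(hit_rate, hit_cycles=1, miss_cycles=5):
    return hit_rate * hit_cycles + (1 - hit_rate) * miss_cycles

print(f"no cache: {amat_cycles(0.0):.1f} cycles per memory access")
print(f"90% hits: {amat_cycles(0.9):.1f} cycles per memory access")
```

Even with a 90% hit rate, the average access is still slower than the 1-cycle ideal, so my intuition is yes, some performance is lost on every cache miss. I’d love to know how big that gap really was on a DX-33.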