rasz_pl wrote on 2023-08-02, 07:40:
wouldnt you solve this by dropping a bus arbitrator between system data bus and ram?
A bus arbitrator would not help. The role of a bus arbitrator is to decide who is entitled to use the bus, and it relies on the devices not driving data onto the bus when they are told not to. What we are discussing here is EDO RAM in a system built for FPM RAM: the EDO RAM drives the bus at times when the chipset expects the RAM to not drive the bus. That is not an issue an arbitrator can solve.
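To make the tri-state conflict a bit more concrete, here is a rough Python sketch of the difference in output-enable behaviour. It is a simplification (it ignores read/write timing and assumes /OE is tied low, as it is on many FPM-era boards), so take it as an illustration, not a datasheet:

```python
# Very simplified model of when an asynchronous DRAM drives its DQ pins.
# FPM: data outputs are released as soon as /CAS goes high.
# EDO: data outputs stay driven after /CAS rises; only /OE going high
#      (or the end of the /RAS cycle) releases them. With /OE tied low,
#      EDO keeps driving the data lines while the chipset already
#      expects the bus to be free.

def drives_bus(kind, ras_low, cas_low, oe_low):
    if kind == "FPM":
        # FPM only drives data while /CAS is asserted (low).
        return ras_low and cas_low and oe_low
    if kind == "EDO":
        # EDO keeps driving after /CAS rises, as long as /OE is low
        # and the row is still open (/RAS low).
        return ras_low and oe_low
    raise ValueError(kind)

# The chipset deasserts /CAS between page-mode reads and expects the bus free:
for kind in ("FPM", "EDO"):
    print(kind, "still drives the bus after /CAS goes high:",
          drives_bus(kind, ras_low=True, cas_low=False, oe_low=True))
# FPM -> False (bus released), EDO -> True (potential contention)
```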
On the other hand, you might be thinking of a buffer chip, especially as you mention the Intel Data Path Units in your next sentences; those are not arbitrators, but they do contain a buffering function. A buffer does indeed help if one side of the buffer is connected to the bus and the other side is connected to a single device that drives its data lines even when it must not drive the system bus. You do not insert a buffer chip without need, though, because the buffer chip also adds some latency to the signal.

In the case of RAM, a buffer chip wouldn't solve the FPM/EDO problem completely, because you would have all SIMM banks connected to one side of the memory data buffer and the system bus to the other side. While the SIMMs can no longer disturb the system bus with the buffer in between, a good FPM chipset can keep a page open in the first bank while it accesses another page in the second bank. Stuff like this is done on later chipsets like the 440BX (even though they use SDRAM, the idea of open and closed pages is the same in FPM, EDO and SDRAM), but I don't know whether any 486 chipset supports open pages in multiple banks at the same time. If multiple banks do have open pages at the same time, though, the memory banks conflict with each other on their own side of the buffer.
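To illustrate what "open pages in multiple banks" buys you, here is a toy Python model of the bookkeeping a page-mode memory controller does; it is purely illustrative and does not correspond to any particular chipset:

```python
# Toy model of a page-mode memory controller that keeps one row ("page")
# open per bank. A page hit only needs a /CAS cycle; a page miss needs a
# /RAS precharge plus opening the new row, which costs extra wait states.

class PageModeController:
    def __init__(self, banks):
        # One "open row" slot per SIMM bank; None means the page is closed.
        self.open_row = [None] * banks

    def access(self, bank, row):
        if self.open_row[bank] == row:
            return "page hit (fast CAS-only cycle)"
        # Close the old page in this bank and open the new one.
        self.open_row[bank] = row
        return "page miss (slow: RAS precharge + new row)"

ctrl = PageModeController(banks=2)
print(ctrl.access(0, row=0x12))  # miss: opens row 0x12 in bank 0
print(ctrl.access(1, row=0x34))  # miss: opens row 0x34 in bank 1
print(ctrl.access(0, row=0x12))  # hit: bank 0's page stayed open meanwhile
```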
rasz_pl wrote on 2023-08-02, 07:40:
looking at INTEL MODEL 430VX.pdf thats what two 82438VX are for, sadly datasheet cuts out before 82438VX https://0x04.net/~mwk/doc/intel/29765303.pdf Those two chips on VX boards look to me as a beefier versions of NeoGeo https://wiki.neogeodev.org/index.php?title=NEO-BUF (https://www.youtube.com/watch?v=UnSGLKsPmTg shows struggle to replace blown ones)
Indeed, the NEO-BUF is also a buffer, not an arbitrator. As explained above, a buffer can work in some situations, but it wouldn't fully solve the problem here. If you can't find a datasheet for the 430VX DPU, look for a datasheet for the 430FX DPU, too; those chips seem to be very similar. It's interesting that you have these dedicated DPU chips on the 430FX and the 430VX, but not on the 430HX, which was released between the 430FX and the 430VX.

The reason there are dedicated buffer chips (and even two of them!) on the 430FX, but not on the 430HX, is that the 430FX only uses chips in PQFP packages, and there is no standard PQFP package with more than 208 pins. Getting good performance with write buffering, and the possibility of having the Pentium access the L2 cache while a PCI bus master accesses RAM, calls for a complex buffer chip with dedicated data pins for the RAM, the PCI bus and the processor/L2. In the Intel 430 architecture, the processor/L2 interface is 64 bits wide, the RAM interface is 64 bits wide and the PCI interface is 32 bits wide. That means 160 pins just for the data lines, before counting any address lines (the PCI bus doesn't need extra address lines, as address and data are multiplexed on the same pins). Obviously, you won't fit the RAM controller and PCI host bridge into 208 pins: you also need 32 processor address pins and around 12 RAM address pins, yielding 204 data + address pins, with no clock or control pins considered yet. That's why Intel split the data handling out of the "north bridge" into dedicated data buffer chips, the "data path units". They also contain a couple of FIFOs for write posting.
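Written out as plain arithmetic (just the numbers from the paragraph above; clock, control, power and ground pins deliberately left out):

```python
# Pin budget for a hypothetical single-chip 430FX-class north bridge,
# using only the figures mentioned above.
host_data = 64   # Pentium/L2 data bus
ram_data  = 64   # DRAM data bus
pci_ad    = 32   # PCI multiplexed address/data lines
cpu_addr  = 32   # processor address lines
ram_addr  = 12   # multiplexed DRAM row/column address (approximate)

data_pins = host_data + ram_data + pci_ad        # 160
total     = data_pins + cpu_addr + ram_addr      # 204

print(f"data pins: {data_pins}, data + address pins: {total} of 208 available")
# 204 of 208 leaves essentially nothing for clocks, control, power and
# ground, which is why the data path was split off into the two DPU chips.
```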
The design of the 82438FX data path units is interesting. Intel split the buses in half: one chip handles the bytes at odd addresses, the other chip handles the bytes at even addresses. This part is quite straightforward. You would expect the chips to have 16 data lines for the PCI bus in this case - but they don't. They have just 8 data lines per chip! The idea is that the multiplexed PCI address/data lines are driven by the 82437FX, called the TSC. As the TSC handles the address bits, it needs to be connected to the PCI address/data pins anyway, but it needs assistance from the DPUs, which contain the write buffers / FIFOs. To save on pin count, the PCI bus data is transferred at 16-bit width with a 66 MHz clock between the 82438 and the 82437, and converted to 32 bits at 33 MHz inside the 82437.
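The trick works out because 16 bits at 66 MHz carries exactly the bandwidth of 32 bits at 33 MHz. A minimal sketch of the reassembly; the half-word ordering here is my assumption, not something taken from the 82437/82438 datasheets:

```python
# Bandwidth check: a 16-bit link at 66 MHz matches a 32-bit bus at 33 MHz.
assert 16 * 66 == 32 * 33

def reassemble(halves):
    """Combine successive 16-bit transfers into 32-bit PCI words.
    Low half first is assumed here; the real ordering inside the
    82437/82438 pair may differ."""
    words = []
    for lo, hi in zip(halves[0::2], halves[1::2]):
        words.append((hi << 16) | lo)
    return words

print([hex(w) for w in reassemble([0xBEEF, 0xDEAD, 0x5678, 0x1234])])
# ['0xdeadbeef', '0x12345678']
```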
The reason Intel "fell back" to dedicated DPU chips in the VX chipset, after already doing away with them in the HX chipset, is most likely that the HX has its north bridge as a single BGA chip, which means a lot of signals in a very small PCB area and requires an expensive PCB with many layers to route them all. As the VX is meant to be the "value edition" of the HX (i.e. cheaper chips allowing cheaper computers), it makes sense to build it in a form factor that allows cheaper PCBs: multiple PQFP chips that can be distributed over the board instead of a single BGA chip.