Hmmm. DOS isn't working either. I tried a metric ton of stuff too, but in doing so I think I figured out why the IT8888-based card is not working. I also tested on a PC Chips M810LR motherboard (AMD Duron 850 CPU, 1GB RAM) because I wanted to see whether our problem was specific to the ICHxx or would also show up on a non-ICH system. The problem still exists. Here's where it gets interesting, and please do correct me if I'm off base here.
CPU -> Southbridge -> LPC claims it -> never reaches the user PCI bus. On both the ICH and non-ICH boards, the motherboard hardware is capturing the I/O cycles itself. The IT8888 never sees the cycle, even though it's configured to respond. This is exactly what (I think it was RayeR) brought up before: the signal never makes it to our card because of the hierarchy of the motherboard architecture.
Solution:
Hypervisor with I/O routing (Xen?). We intercept at the CPU instruction level, before chipset routing, and redirect the access to the IT8888 on the PCI bus rather than letting the hardware handle it.
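To make the interception half concrete, here's a minimal sketch against the KVM userspace API (all names below are mine, and forward_to_it8888() is just a placeholder, because that's exactly the unsolved part). The guest's OUT causes a VM exit (KVM_EXIT_IO) before any chipset routing can swallow it, and the exit structure hands us the port, size, and data:

```c
/* Minimal sketch: trap the guest's OUT instructions via KVM_EXIT_IO.
 * Guest memory setup and register init are omitted for brevity;
 * forward_to_it8888() is a stub for the part we don't know how to do yet. */
#include <fcntl.h>
#include <linux/kvm.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>

/* Placeholder: turning this into a PCI I/O cycle that the IT8888
 * positively decodes is exactly what still needs research. */
static void forward_to_it8888(uint16_t port, const uint8_t *data, int size)
{
    printf("guest OUT 0x%04x, %d byte(s), first byte 0x%02x\n",
           port, size, data[0]);
}

int main(void)
{
    int kvm  = open("/dev/kvm", O_RDWR);
    int vm   = ioctl(kvm, KVM_CREATE_VM, 0);
    int vcpu = ioctl(vm, KVM_CREATE_VCPU, 0);

    int run_size = ioctl(kvm, KVM_GET_VCPU_MMAP_SIZE, 0);
    struct kvm_run *run = mmap(NULL, run_size, PROT_READ | PROT_WRITE,
                               MAP_SHARED, vcpu, 0);

    /* ... KVM_SET_USER_MEMORY_REGION, register setup, etc. omitted ... */

    for (;;) {
        ioctl(vcpu, KVM_RUN, 0);
        if (run->exit_reason == KVM_EXIT_IO &&
            run->io.direction == KVM_EXIT_IO_OUT) {
            uint8_t *data = (uint8_t *)run + run->io.data_offset;
            /* e.g. a legacy range like 0x220-0x22F or 0x388 (AdLib) */
            forward_to_it8888(run->io.port, data, run->io.size);
        }
    }
}
```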
Technical nuance:
"Redirect the access to the IT8888 on the PCI bus" is slightly simplified. More precisely, the hypervisor would intercept the OUT instructions via VM exits, then somehow generate PCI I/O cycles (rather than CPU I/O instructions) that reach the IT8888 on the PCI bus. The exact mechanism for this is unclear; it might require PCI device passthrough, special hypervisor code, or discovering undocumented IT8888 capabilities. The IT8888 datasheet shows it has PCI config registers but no explicit "I/O forwarding API," so what's likely needed is experimentation with how to actually send individual I/O operations through its PCI interface. "PCI device passthrough" typically means passing a PCI device's MMIO/config space through to a VM. What I think we'd actually need is more exotic: the ability to generate PCI I/O transactions (not memory-mapped, but actual PCI I/O cycles) from the hypervisor to specific addresses. This is uncommon, since most modern PCI devices use MMIO, not I/O space.
This hypervisor approach requires significant custom development work: writing hypervisor I/O handlers and potentially reverse engineering how to communicate with the IT8888 over PCI, since ITE's documentation doesn't explicitly describe a forwarding mechanism for individual I/O operations. This is where the PCI/ISA logic analyzer would be useful; I knew it would come in handy, just not in this exact way. This is genuinely complex, potentially months of work, but it's theoretically sound. To me this is really just a roadblock: there is seemingly nothing that would stop us from doing it except a large chunk of work.
Xen (or KVM) would allow intercepting I/O instructions via hardware virtualization (VT-x/AMD-V), but the harder part is generating PCI I/O cycles to the IT8888. This likely means modifying the hypervisor's device model (QEMU in both Xen and KVM) with custom code that can issue PCI I/O transactions, something most hypervisors don't natively support since modern devices use MMIO instead.
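Roughly what that device-model change might look like, sketched against QEMU's memory API (this only builds inside the QEMU tree, the port range and names are my own choices, and the forwarding itself is still a stub):

```c
/* Claim a legacy ISA port range on the guest's I/O address space so the
 * accesses land in our handler instead of QEMU's defaults.  Pushing them
 * out to the real IT8888 as PCI I/O cycles is the unsolved part. */
#include "qemu/osdep.h"
#include "exec/address-spaces.h"   /* get_system_io() */
#include "exec/memory.h"

static MemoryRegion it8888_fwd_region;

static uint64_t it8888_fwd_read(void *opaque, hwaddr addr, unsigned size)
{
    /* TODO: read the byte back from the real IT8888 somehow */
    return 0xff;
}

static void it8888_fwd_write(void *opaque, hwaddr addr, uint64_t val,
                             unsigned size)
{
    /* TODO: turn this into a PCI I/O write that the real IT8888 decodes */
    fprintf(stderr, "it8888-fwd: port 0x%x <- 0x%02x\n",
            0x220 + (unsigned)addr, (unsigned)val);
}

static const MemoryRegionOps it8888_fwd_ops = {
    .read  = it8888_fwd_read,
    .write = it8888_fwd_write,
    .endianness = DEVICE_LITTLE_ENDIAN,
    .impl = { .min_access_size = 1, .max_access_size = 1 },
};

/* Call from board/device init: claim 0x220-0x22F (Sound Blaster base,
 * just as an example range). */
static void it8888_fwd_register(void)
{
    memory_region_init_io(&it8888_fwd_region, NULL, &it8888_fwd_ops, NULL,
                          "it8888-fwd", 0x10);
    memory_region_add_subregion(get_system_io(), 0x220, &it8888_fwd_region);
}
```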
All this really shows the genius of the original dISApointment, which we all knew of course.
~~~
I think at this point I should go back to messing around with a far more modern system and try the PCIe-PCI bridge? Or maybe there is still hope and someone has an idea that might bypass the need for hypervisor capture?
Potential PCIe-to-PCI-to-ISA pathway repository: https://github.com/DartFrogTek/PCIe-PCI-ISA