First post, by superfury
To make my CPU emulation more cycle-accurate, would it be beneficial to create a kind of 16-bit bytecode/microcode block, stored inside the emulator, for every virtual (8086/8088) CPU instruction? Each block would describe the basic steps the CPU performs to execute its instruction: load from memory, load from ModR/M, fetch, execute (the actual core calculation part of the instruction, like adding two numbers together or otherwise affecting state, without accessing memory), store to ModR/M, store immediate. So essentially the same approach as modern RISC processing?
Would this result in better and more accurate CPU emulation than a single function doing all of the above (except the ModR/M reading)?
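Something like this minimal sketch of a micro-op list in C, just to illustrate the idea (all names like uop_t and uops_add_rm16_r16 are hypothetical, not actual UniPCemu code, and the cycle counts are placeholders):

```c
#include <stdint.h>

/* Hypothetical micro-operation kinds: one per basic step. */
typedef enum {
    UOP_FETCH_MODRM,   /* decode the ModR/M byte */
    UOP_LOAD_RM16,     /* read a 16-bit operand from the ModR/M target into a temp */
    UOP_LOAD_REG16,    /* read a 16-bit register operand into a temp */
    UOP_ALU_ADD16,     /* the "execute" step: temp1 += temp2, update flags */
    UOP_STORE_RM16,    /* write the result back to the ModR/M target */
    UOP_END            /* instruction finished */
} uop_kind;

/* One micro-op: a kind plus a base cycle count for timing. */
typedef struct {
    uop_kind kind;
    uint8_t cycles;    /* cycles this step would cost (placeholder values) */
} uop_t;

/* Micro-program for ADD r/m16, r16 (opcode 0x01). */
static const uop_t uops_add_rm16_r16[] = {
    { UOP_FETCH_MODRM, 1 },
    { UOP_LOAD_RM16,   2 },
    { UOP_LOAD_REG16,  0 },
    { UOP_ALU_ADD16,   3 },
    { UOP_STORE_RM16,  2 },
    { UOP_END,         0 }
};
```

The emulator core would then just iterate over such a table per instruction instead of running one monolithic handler function.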
Maybe some kind of microcode that the emulator executes in parallel with the rest of the hardware and the prefetch unit? The microsequencer would execute one basic action at a time (fetch a parameter; fetch a ModR/M memory operand; move a ModR/M register byte/word/dword into a (temporary) CPU register; toggle the bit size (8/16/32-bit); execute a basic operation, like adding numbers together; store 8/16/32-bit data to memory, either via ModR/M or directly from a parameter), after which the prefetch unit and other hardware update their state (the prefetch unit fetching from memory when the bus is free and it has enough room left in its buffer).
Could each of those 'microsequencer' instructions then replicate the timings of the 8086/8088, giving a cycle-accurate 8086/8088 CPU emulation? Anyone?
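A rough sketch of what that per-cycle tick could look like, with the execution unit and the prefetch queue (BIU) advancing together; again everything here (cpu_t, start_next_uop, read_code_byte) is made up for illustration, and a real 8088 byte fetch takes 4 bus cycles where this fetches in one for brevity:

```c
#include <stdint.h>
#include <stdbool.h>

#define PREFETCH_SIZE 4  /* the 8088 has a 4-byte prefetch queue (6 on the 8086) */

/* Hypothetical CPU state; only what this sketch needs. */
typedef struct {
    int uop_cycles_left;            /* cycles remaining for the current micro-op */
    bool bus_busy;                  /* set while a micro-op owns the bus */
    uint8_t queue[PREFETCH_SIZE];   /* prefetch queue contents */
    int queue_len;
} cpu_t;

/* Placeholder: begin the next micro-op and set its cycle cost. */
static void start_next_uop(cpu_t *cpu)
{
    cpu->uop_cycles_left = 3;  /* would come from the micro-program table */
    cpu->bus_busy = false;     /* would depend on whether the step touches memory */
}

/* Placeholder: fetch one code byte at CS:IP. */
static uint8_t read_code_byte(cpu_t *cpu)
{
    (void)cpu;
    return 0x90;  /* NOP, stand-in for a real memory read */
}

/* One emulated clock tick: microsequencer and BIU run in parallel. */
static void cpu_tick(cpu_t *cpu)
{
    /* Execution unit: burn one cycle of the current micro-op. */
    if (cpu->uop_cycles_left > 0) {
        --cpu->uop_cycles_left;
    }
    if (cpu->uop_cycles_left == 0) {
        start_next_uop(cpu);
    }

    /* Bus interface unit: prefetch when the bus is idle and
       the queue has room, like the real hardware does. */
    if (!cpu->bus_busy && cpu->queue_len < PREFETCH_SIZE) {
        cpu->queue[cpu->queue_len++] = read_code_byte(cpu);
    }
}
```

The point being that the documented per-instruction cycle counts would fall out of the micro-op costs plus the bus contention between the execution unit and the prefetcher, rather than being hardcoded per instruction.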
Author of the UniPCemu emulator.
UniPCemu Git repository
UniPCemu for Android, Windows, PSP, Vita and Switch on itch.io