First post, by superfury
When loading (jumping to) a different address, IP or EIP is loaded depending on the operand size. But when the operand size is 16 bits, the upper 16 bits of EIP are masked to zero.
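That masking step could be sketched roughly like this (a minimal model, assuming a single internal EIP register and a flag for the effective operand size — names are illustrative, not any real emulator's API):

```c
#include <stdint.h>

/* Hypothetical model: the CPU only has a 32-bit EIP; a branch with a
   16-bit operand size zero-extends the target into it. */
uint32_t EIP;

void branch_to(uint32_t target, int operandsize32)
{
    if (operandsize32)
        EIP = target;          /* 32-bit operand size: full load */
    else
        EIP = target & 0xFFFF; /* 16-bit operand size: upper 16 bits cleared */
}
```

So `branch_to(0x12345678, 0)` would leave EIP at 0x5678, matching the "upper bits masked to zero" behaviour described above.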
So even though software might use IP in 16-bit mode, the CPU might only have EIP (with its 'IP' half being the part written to memory for procedure calls/interrupts in 16-bit mode).
So any overflow past the 16-bit limit will cause EIP's low 16 bits to be written, and EIP (having wrapped to 0x10000 for the next instruction) will resume at the #GP handler. When said handler returns, IP is loaded from the stack and stored into EIP as a 16-bit value, clearing the upper 16 bits of EIP once more. So from a software perspective, this EIP wrapping (when it doesn't occur in the middle of an instruction) is provided for free (assuming the #GP handler returns normally)? It's only instructions that wrap IP in the middle of an instruction that will hang the CPU this way (since they return to the instruction itself, which still lies before the wrap).
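The mid-instruction hang can be illustrated with a toy model (assumed names, not real CPU or UniPCemu internals): since #GP is a fault, the pushed IP points at the *start* of the faulting instruction, so a 16-bit return lands right back on it.

```c
#include <stdint.h>

/* Toy model: an instruction starts just below the 64KB limit, and its
   fetch would cross offset 0xFFFF, so it faults before completing. */
uint32_t EIP = 0xFFFE;   /* start of the faulting instruction */
uint16_t saved_IP;

/* #GP is a fault: the saved IP is the start of the faulting instruction,
   not the address past it. */
void raise_gp(void) { saved_IP = (uint16_t)EIP; }

/* 16-bit IRET: pop IP and zero-extend into EIP. */
void iret16(void) { EIP = saved_IP; }
```

After `raise_gp(); iret16();` EIP is back at 0xFFFE: the handler returns to the same instruction, which faults again, which is the hang described above. Wrapping *between* instructions never hits this case, because the saved IP already points past the wrap.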
So, thinking along these lines, IP in essence doesn't exist on 32-bit x86 CPUs? The lower 16 bits are stored to memory during any procedure call (INT, CALL), but loads always write the full EIP register regardless of the operand size (with the upper half zeroed for 16-bit operands)? So IP only exists on the 80286 and earlier (although the 80286 might have 17-bit registers to facilitate the overflow exception when going past offset 0xFFFF on any memory operand, including IP)?
Author of the UniPCemu emulator.
UniPCemu Git repository
UniPCemu for Android, Windows, PSP, Vita and Switch on itch.io