First post, by superfury
During an x86 stack switch (e.g. a far call through a call gate, or a far return to an outer privilege level), how does the CPU know whether to load SP or ESP? And what size does it push or pop on the stack? Does it use the new B-bit or the original B-bit (of the stack that was in SS before the instruction started loading the new SS)?
Edit: Currently I have the following implemented:
- Stack switch to a higher privilege level: the TSS size determines whether a 16-bit or 32-bit SS:(E)SP value is loaded from the TSS. The B-bit of the resulting stack segment descriptor determines whether ESP (set) or SP (cleared) is loaded with that value, zero-extended (16->32) or truncated (32->16) as needed.
- Stack return to a lower privilege level: the operand size determines whether a 16-bit or 32-bit (E)SP is popped from the stack. The B-bit of the stack segment descriptor that is loaded afterwards determines whether SP (cleared) or ESP (set) receives that value, zero-extended (16->32) or truncated (32->16) as needed. This applies to both RETF and IRET.
- Call gate to a higher privilege level: the stack switch occurs as described above. SS is then pushed as a 16-bit or 32-bit value based on the call gate size, and the call gate size also determines whether 32-bit ESP or 16-bit SP is pushed (with the stack pointer decreased by 4 or 2 accordingly). Extra parameters are pushed the same way (on the destination stack), and so is the return address (see the sketch after this list).
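To make the list above a bit more concrete, here is a minimal, self-contained C sketch of that decision logic. It is not the actual UniPCemu code: every type and name in it (SEGDESC, TSS, CPUSTATE, push, switch_to_inner_stack, callgate_pushes, return_to_outer_stack) is made up for illustration, and it only models the SP/ESP and push/pop size selection, not the memory accesses or protection checks.

```c
/* Minimal sketch of the decision logic described above (NOT the actual
   UniPCemu code; every type and name here is made up for illustration). */
#include <stdint.h>

typedef struct { int big; } SEGDESC;      /* B-bit of a stack segment descriptor */

typedef struct {
    int is386;                            /* 32-bit TSS vs. 16-bit TSS */
    uint32_t ESPn[3];                     /* ESP0..ESP2 (32-bit TSS) */
    uint16_t SPn[3];                      /* SP0..SP2   (16-bit TSS) */
    uint16_t SSn[3];                      /* SS0..SS2 */
} TSS;

typedef struct {
    uint16_t SS;
    uint32_t ESP;
    SEGDESC  SS_desc;                     /* descriptor currently cached for SS */
} CPUSTATE;

/* Push: the B-bit of the *current* SS descriptor selects SP or ESP as the
   stack pointer (and its wrap size); 'size32' selects the 2- or 4-byte decrement. */
void push(CPUSTATE *cpu, uint32_t value, int size32)
{
    uint32_t dec = size32 ? 4 : 2;
    if (cpu->SS_desc.big)
        cpu->ESP -= dec;                                          /* full 32-bit ESP */
    else
        cpu->ESP = (cpu->ESP & 0xFFFF0000u) | ((cpu->ESP - dec) & 0xFFFFu); /* SP only */
    (void)value;                          /* ...write 16 or 32 bits at SS:(E)SP... */
}

/* Bullet 1: stack switch to a higher privilege level.
   The TSS type picks the size of the SS:(E)SP image read from the TSS;
   the B-bit of the *new* SS descriptor picks whether SP or ESP receives it. */
void switch_to_inner_stack(CPUSTATE *cpu, const TSS *tss, int newCPL, SEGDESC newSSdesc)
{
    uint32_t newESP = tss->is386 ? tss->ESPn[newCPL]              /* 32-bit value */
                                 : (uint32_t)tss->SPn[newCPL];    /* zero-extended */
    cpu->SS = tss->SSn[newCPL];
    cpu->SS_desc = newSSdesc;
    if (newSSdesc.big)
        cpu->ESP = newESP;                                        /* load full ESP */
    else
        cpu->ESP = (cpu->ESP & 0xFFFF0000u) | (newESP & 0xFFFFu); /* load SP, truncated */
}

/* Bullet 3: call gate to a higher privilege level.
   After the inner stack is active, the gate size (286 vs. 386 gate) decides
   whether SS, (E)SP, the parameters and the return address are pushed as
   16-bit or 32-bit quantities. */
void callgate_pushes(CPUSTATE *cpu, int gate32, uint16_t oldSS, uint32_t oldESP)
{
    push(cpu, oldSS, gate32);                                     /* old SS */
    push(cpu, gate32 ? oldESP : (oldESP & 0xFFFFu), gate32);      /* old ESP or SP */
    /* parameters and the CS:(E)IP return address follow the same rule */
}

/* Bullet 2: RETF/IRET to a lower privilege level.
   The operand size decides whether a 16-bit or 32-bit (E)SP is popped; the B-bit
   of the SS descriptor loaded *afterwards* decides whether SP or ESP receives it. */
void return_to_outer_stack(CPUSTATE *cpu, int operand32, uint32_t poppedESP,
                           uint16_t poppedSS, SEGDESC newSSdesc)
{
    uint32_t value = operand32 ? poppedESP : (poppedESP & 0xFFFFu);
    cpu->SS = poppedSS;
    cpu->SS_desc = newSSdesc;
    if (newSSdesc.big)
        cpu->ESP = value;                                         /* load full ESP */
    else
        cpu->ESP = (cpu->ESP & 0xFFFF0000u) | (value & 0xFFFFu);  /* load SP, truncated */
}
```

The part I'm least sure about is push(): it uses the new SS descriptor's B-bit for the stack pointer size, while the gate size only controls the operand width and the 2/4-byte decrement.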
Does anyone know if this behaviour is correct?
Author of the UniPCemu emulator.
UniPCemu Git repository
UniPCemu for Android, Windows, PSP, Vita and Switch on itch.io