VOGONS


First post, by videogamer555

Rank: Member

I know that the opcode prefix 0x66 tells the CPU that the following opcode is to behave as an opcode of a different bitness. For example, if the CPU is in 32-bit mode, the prefix 0x66 tells it to treat the following opcode as a 16-bit instruction (it will use 16-bit registers like AX and DX instead of EAX and EDX). Meanwhile, if the CPU is in 16-bit mode, the 0x66 prefix tells the CPU to treat the following opcode as a 32-bit instruction (it will use 32-bit registers like EAX and EDX, instead of AX and DX).

But how do you set the mode? Yes, the 0x66 opcode prefix tells the CPU (for a single opcode) to use the opposite of the bitness mode currently set for the CPU, but I want to know how to set it in the first place. I'm writing an assembly language program for DOS, and I need to know which opcode is sent to the CPU to actually set it into either 16-bit or 32-bit mode.
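To make the prefix concrete, here is a hand-checked sketch (NASM syntax) showing that the opcode byte itself does not change; only the 66h prefix does:

```asm
; assembled with NASM in 16-bit real mode
BITS 16
mov ax, 0x1234          ; encodes as B8 34 12 (16-bit operand)
mov eax, 0x12345678     ; encodes as 66 B8 78 56 34 12 (same B8 opcode, 66h prefix)
```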

Reply 2 of 25, by videogamer555

Rank: Member
ripsaw8080 wrote:

I know there is real versus protected mode, but I would like to know how to use 32bits in real mode, not switch to protected mode for 32bit access. Protected mode, if I'm not mistaken, is what Windows uses, and prevents you from using "privileged instructions", which are those that directly interact with hardware. I don't want to switch DOS into protected mode (which would cost me the ability to directly interact with hardware). I just want to switch to using 32bit instructions.

Reply 3 of 25, by ripsaw8080

Rank: DOSBox Author

In addition to giving access to more memory, protected mode also switches the CPU into using 32-bit registers by default.

If you want to use 32-bit registers in real mode then the 66h prefix byte is the only way. Most assemblers will add the prefix byte and use 32-bit constants automatically if you reference a 32-bit register (e.g. EAX instead of AX), but you may have to enable 386 instructions in some way, such as the .386 directive in MASM.
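A minimal MASM sketch of that (note the ordering is an assumption worth checking: placing .386 after .model keeps the segments 16-bit, so the assembler emits the 66h prefix for you):

```asm
; MASM sketch: enable 386 instructions in a 16-bit real-mode program
.model small
.386                    ; after .model, so segments stay 16-bit (USE16)
.code
start:
    mov eax, 12345678h  ; assembler emits the 66h prefix automatically here
    mov ah, 4Ch
    int 21h             ; exit to DOS
end start
```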

Reply 4 of 25, by videogamer555

Rank: Member
ripsaw8080 wrote:

In addition to giving access to more memory, protected mode also switches the CPU into using 32-bit registers by default.

If you want to use 32-bit registers in real mode then the 66h prefix byte is the only way. Most assemblers will add the prefix byte and use 32-bit constants automatically if you reference a 32-bit register (e.g. EAX instead of AX), but you may have to enable 386 instructions in some way, such as the .386 directive in MASM.

If I'm not mistaken though, doesn't protected mode block the use of "privileged instructions" such as INT (call an interrupt), IN (get data from a device with a port number), and OUT (send data to a device with a port number)? I know Windows does this with its concept of user mode and kernel mode. I think user mode in Windows (also known as ring 3, which is what all applications run in) is just a fancy name for protected mode, and kernel mode (also known as ring 0, accessible only to the OS itself and certain drivers) is just a fancy name for real mode.

Reply 5 of 25, by ripsaw8080

Rank: DOSBox Author

Of course it's not possible to call a real-mode interrupt directly from protected mode; it's necessary to first switch back to real mode.

DOS programs that use protected mode typically employ an interface that eliminates most of the hassles -- some flavor of DPMI or an extender that includes DPMI (DOS/4G, PMODE, et al.)

Reply 6 of 25, by videogamer555

Rank: Member
ripsaw8080 wrote:

Of course it's not possible to call a real-mode interrupt directly from protected mode; it's necessary to first switch back to real mode.

DOS programs that use protected mode typically employ an interface that eliminates most of the hassles -- some flavor of DPMI or an extender that includes DPMI (DOS/4G, PMODE, et al.)

Isn't the command to change back to real mode itself a "privileged instruction"? Doesn't that prevent returning to real mode, until you reboot the computer?
And is there such a thing as a "protected mode interrupt"? Or are all interrupts real mode instructions?
And are the IN and OUT instructions blocked in protected mode? Which x86 instructions can't be used in protected mode?

Reply 7 of 25, by ripsaw8080

Rank: DOSBox Author

Moved this to Milliways, as the topic is only indirectly related to DOSBox.

Few people here, myself included, have the time or patience to educate you on the subject; so I suggest you read up about DPMI if you're serious about writing protected mode programs for DOS.

Reply 8 of 25, by vladstamate

Rank: Oldbie
videogamer555 wrote:

I know Windows does this with its concept of user mode and kernel mode. I think user mode in Windows (also known as ring 3, which is what all applications run in) is just a fancy name for protected mode, and kernel mode (also known as ring 0, accessible only to the OS itself and certain drivers) is just a fancy name for real mode.

No, that is not true. Protected mode is a mode the CPU operates in, and it is orthogonal to what we think of as kernel and user modes. The kernel also runs in protected mode; it is just that the privilege bits are set to 0 (ring 0). It also cannot run real-mode interrupts (like the BIOS). For a piece of code to be considered "kernel mode", all it means is that the code is executed from a segment whose descriptor has its privilege bits set to 0. That is all.

As for 32 vs. 16 bit in protected mode, you can set the default mode (which can later be overridden per instruction by the 0x66 prefix for operand size, or 0x67 for address size) by changing the Sz bit (Intel's manuals call it the D/B bit) in the flags of the segment descriptor entry: 1 means a 32-bit default, 0 means a 16-bit default.

Protected mode does not mean 32-bit by default; that is controlled by the segment descriptor.
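For illustration, a flat 4 GB ring-0 code descriptor with that bit set could be laid out like this in a GDT (a hand-built sketch in NASM syntax):

```asm
; one 8-byte GDT code descriptor, flat 4 GB base 0 (sketch)
gdt_code:
    dw 0xFFFF       ; limit bits 15:0
    dw 0x0000       ; base bits 15:0
    db 0x00         ; base bits 23:16
    db 0x9A         ; access: present, ring 0, code, readable
    db 0xCF         ; flags (G=1 page granularity, D/B=1 -> 32-bit default) + limit 19:16
    db 0x00         ; base bits 31:24
```

Changing the flags byte from 0xCF to 0x8F would clear the D/B bit and make the segment default to 16-bit.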

YouTube channel: https://www.youtube.com/channel/UC7HbC_nq8t1S9l7qGYL0mTA
Collection: http://www.digiloguemuseum.com/index.html
Emulator: https://sites.google.com/site/capex86/
Raytracer: https://sites.google.com/site/opaqueraytracer/

Reply 9 of 25, by vladstamate

Rank: Oldbie
videogamer555 wrote:

If I'm not mistaken though, doesn't protected mode block the use of "privileged instructions" such as INT (call an interrupt), IN (get data from a device with a port number), and OUT (send data to a device with a port number)?

No, it does not. There are only a handful of instructions that cannot be used unless you are in PM (the ARPL instruction, for example).

As for IN/OUT access, that is software-controlled via the FLAGS register (the IOPL field). The CPU checks the value in the IOPL field and compares it against your task's CPL (current privilege level). In layman's terms, the FLAGS register can say which ring is allowed to use the IN/OUT instructions.
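As a sketch, the IOPL field can be read straight out of EFLAGS (it occupies bits 12-13):

```asm
; read IOPL from EFLAGS (sketch; changing IOPL via POPF
; only works at CPL 0 - otherwise it is silently ignored)
pushfd              ; push EFLAGS
pop eax
shr eax, 12
and eax, 3          ; EAX = IOPL, a value from 0 to 3
```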


Reply 10 of 25, by vladstamate

Rank: Oldbie
videogamer555 wrote:

I know that the opcode prefix 0x66 tells the CPU that the following opcode is to behave as an opcode of a different bitness. For example, if the CPU is in 32-bit mode, the prefix 0x66 tells it to treat the following opcode as a 16-bit instruction (it will use 16-bit registers like AX and DX instead of EAX and EDX). Meanwhile, if the CPU is in 16-bit mode, the 0x66 prefix tells the CPU to treat the following opcode as a 32-bit instruction (it will use 32-bit registers like EAX and EDX, instead of AX and DX).

But how do you set the mode? Yes, the 0x66 opcode prefix tells the CPU (for a single opcode) to use the opposite of the bitness mode currently set for the CPU, but I want to know how to set it in the first place. I'm writing an assembly language program for DOS, and I need to know which opcode is sent to the CPU to actually set it into either 16-bit or 32-bit mode.

To answer your original question: if you do not want to mess with protected mode, then the 0x66/0x67 prefixes are your only way to get access to 32-bit registers and 32-bit addressing. There is no instruction that switches real mode to 32-bit. Assemblers should be able to generate the prefix if you write instructions like this:

MOV EAX, 0x5


Reply 11 of 25, by Jorpho

Rank: l33t++
videogamer555 wrote:

Isn't the command to change back to real mode itself a "privileged instruction"? Doesn't that prevent returning to real mode, until you reboot the computer?

"Triple faulting" was sometimes used to switch back to real mode.
https://en.wikipedia.org/wiki/Triple_fault

It has come up in numerous posts at http://www.os2museum.com , like http://www.os2museum.com/wp/ms-os2-patents/ and http://www.os2museum.com/wp/why-os2-is-hard-to-virtualize/ ; as noted, this is not typically supported by virtual machines.

Reply 12 of 25, by videogamer555

Rank: Member
vladstamate wrote:
videogamer555 wrote:

I know Windows does this with its concept of user mode and kernel mode. I think user mode in Windows (also known as ring 3, which is what all applications run in) is just a fancy name for protected mode, and kernel mode (also known as ring 0, accessible only to the OS itself and certain drivers) is just a fancy name for real mode.

No, that is not true. Protected mode is a mode the CPU operates in, and it is orthogonal to what we think of as kernel and user modes. The kernel also runs in protected mode; it is just that the privilege bits are set to 0 (ring 0). It also cannot run real-mode interrupts (like the BIOS). For a piece of code to be considered "kernel mode", all it means is that the code is executed from a segment whose descriptor has its privilege bits set to 0. That is all.

As for 32 vs. 16 bit in protected mode, you can set the default mode (which can later be overridden per instruction by the 0x66 prefix for operand size, or 0x67 for address size) by changing the Sz bit (Intel's manuals call it the D/B bit) in the flags of the segment descriptor entry: 1 means a 32-bit default, 0 means a 16-bit default.

Protected mode does not mean 32-bit by default; that is controlled by the segment descriptor.

So if even the kernel (ring 0) is prevented from using interrupts (the INT opcode) when in protected mode, how does it communicate with any I/O devices? For that matter, how does it do something as simple as setting the color of a pixel on the screen? Doesn't that require access to interrupt 10h? Or getting the keystrokes when you type on your keyboard?

Reply 13 of 25, by vladstamate

Rank: Oldbie

You do not need INT 10h to put pixels on the screen. If the OS switches to PM, then it will need to provide the equivalent of all those interrupts. Setting up a good IDT is required as part of switching to PM. Also think about the IRQs; you would need to have interrupt handlers set up for those as well.

The thing to remember, though, is that you CAN use INT 10h (or other interrupts) even in protected mode. However, you run the risk of them messing things up, because they are designed to work in real mode. So if INT 10h tries to write to B800:0, you had better hope the B800 selector has a proper base that points to the CGA memory. And so on.


Reply 14 of 25, by Azarien

Rank: Oldbie

But how do you set the mode?

Protected vs. real mode is set by a specific bit in the CR0 control register, followed by a far jump to the new code.
16-bit vs. 32-bit protected mode is set by a specific bit in the segment descriptor. You switch the "bitness" by doing a far jump (that is, with selector:offset) to a segment with the desired bitness set.
Segment descriptors are memory structures containing information about a segment's properties (location, size, code vs. data, privilege, etc.). You set them up with the LGDT and LLDT instructions.
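The steps above can be sketched as follows (NASM syntax; the labels gdt_desc and pm_entry and the selectors 0x08/0x10 are assumptions, standing in for a GDT you have already built):

```asm
; minimal real-mode -> protected-mode switch (sketch)
BITS 16
    cli                     ; no interrupts while the IDT is invalid
    lgdt [gdt_desc]         ; load the GDT register (assumed set up earlier)
    mov eax, cr0
    or  eax, 1              ; set the PE bit
    mov cr0, eax
    jmp dword 0x08:pm_entry ; far jump reloads CS with a 32-bit code selector

BITS 32
pm_entry:
    mov ax, 0x10            ; assumed flat data selector
    mov ds, ax
    mov ss, ax
```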

If I'm not mistaken though, doesn't protected mode block the use of "privileged instructions"

Not by default, but it has the *ability* to block certain instructions. The "ring" of a certain segment is set in its descriptor. Ring 0 has access to all instructions and is intended for kernel code. Ring 3 is limited, intended for user code. Rings 1 and 2 exist but are rarely used.

I think user mode in Windows (also known as ring 3, which is what all applications run in) is just a fancy name for protected mode, and kernel mode (also known as ring 0, accessible only to the OS itself and certain drivers) is just a fancy name for real mode.

No, both are aspects of protected mode. There are no "rings" in real mode.

Reply 15 of 25, by videogamer555

Rank: Member
vladstamate wrote:

You do not need INT 10h to put pixels on the screen. If the OS switches to PM, then it will need to provide the equivalent of all those interrupts. Setting up a good IDT is required as part of switching to PM. Also think about the IRQs; you would need to have interrupt handlers set up for those as well.

The thing to remember, though, is that you CAN use INT 10h (or other interrupts) even in protected mode. However, you run the risk of them messing things up, because they are designed to work in real mode. So if INT 10h tries to write to B800:0, you had better hope the B800 selector has a proper base that points to the CGA memory. And so on.

So does that mean that I have to manually write 256 pieces of code (one for each interrupt, as any one could be called at any time, without warning, by the underlying system), and make sure they behave in exactly the same way as the original real-mode versions of the interrupts? What happens if I just disable all interrupts by using the assembly language opcode CLI? Doesn't that prevent any interrupts from ever being used?

Reply 16 of 25, by vladstamate

Rank: Oldbie
videogamer555 wrote:

So does that mean that I have to manually write 256 pieces of code (one for each interrupt, as any one could be called at any time, without warning, by the underlying system), and make sure they behave in exactly the same way as the original real-mode versions of the interrupts?

For a complete system yes you do. You will need to provide things like INT 13h, INT 10h, etc. Or alternatively your OS will have to provide function calls to do what those interrupts used to do (in Linux world you have IOCTL).

videogamer555 wrote:

What happens if I just disable all interrupts by using the assembly language opcode CLI? Doesn't that prevent any interrupts from ever being used?

CLI only disables IRQs (hardware interrupts). It does not disable the INT xx instruction.
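A quick real-mode sketch of the distinction: after CLI, a software INT still executes normally, because CLI only masks the maskable hardware interrupts:

```asm
; real-mode sketch (NASM, BITS 16)
cli             ; masks maskable hardware interrupts (IRQs) only
mov ah, 0x0E    ; BIOS teletype output function
mov al, '!'
xor bh, bh      ; page 0
int 0x10        ; software interrupt: still works, prints '!'
sti             ; re-enable IRQs
```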

I think we are all missing the big picture here. What are you trying to do? Because how you set up the system before you enter PM differs wildly depending on your goal.


Reply 17 of 25, by ripsaw8080

Rank: DOSBox Author
vladstamate wrote:
videogamer555 wrote:

So does that mean that I have to manually write 256 pieces of code (one for each interrupt, as any one could be called at any time, without warning, by the underlying system), and make sure they behave in exactly the same way as the original real-mode versions of the interrupts?

For a complete system yes you do. You will need to provide things like INT 13h, INT 10h, etc. Or alternatively your OS will have to provide function calls to do what those interrupts used to do (in Linux world you have IOCTL).

You only have to temporarily switch back to real mode to run real-mode interrupts. And you don't have to reinvent all the mode switching machinations if you just use a DPMI host/server.

Read the section "Interrupts in protected mode": http://www.delorie.com/djgpp/doc/ug/interrupt … thandlers2.html

Reply 18 of 25, by Jorpho

Rank: l33t++
videogamer555 wrote:

So does that mean that I have to manually write 256 pieces of code (one for each interrupt, as any one could be called at any time, without warning, by the underlying system), and make sure they behave in exactly the same way as the original real-mode versions of the interrupts? What happens if I just disable all interrupts by using the assembly language opcode CLI? Doesn't that prevent any interrupts from ever being used?

There is a blurb about this in one of the posts I linked to above.

Letwin’s real contribution, and the bulk of patent 4825358, was a way to make real and protected-mode software coexist and run with minimum performance penalty, and without consuming large amounts of memory. The key to achieve that is dual-mode or bi-modal code and data. Bi-modal code is program code which can be executed in either real or protected mode. This is accomplished by creating protected memory mappings for both program code and data such that a protected-mode selector:offset address refers to the same memory location as a real-mode segment:offset address.

Bi-modal code is especially useful for interrupt handling, where mode switching can be prohibitively expensive; there is a good reason why bi-modal interrupt handlers were also used with some DOS extenders. OS/2 1.x used Letwin’s patent and bi-modal code extensively, and therefore achieved decent performance in its DOS box. Many OS/2 1.x device drivers were dual-mode and hence a nightmare to write.

Reply 19 of 25, by videogamer555

Rank: Member
vladstamate wrote:
videogamer555 wrote:

So does that mean that I have to manually write 256 pieces of code (one for each interrupt, as any one could be called at any time, without warning, by the underlying system), and make sure they behave in exactly the same way as the original real-mode versions of the interrupts?

For a complete system yes you do. You will need to provide things like INT 13h, INT 10h, etc. Or alternatively your OS will have to provide function calls to do what those interrupts used to do (in Linux world you have IOCTL).

videogamer555 wrote:

What happens if I just disable all interrupts by using the assembly language opcode CLI? Doesn't that prevent any interrupts from ever being used?

CLI only disables IRQs (hardware interrupts). It does not disable the INT xx instruction.

I think we are all missing the big picture here. What are you trying to do? Because how you set up the system before you enter PM differs wildly depending on your goal.

Is there a way to prevent hardware interrupts from happening (such as preventing the CPU from responding to interrupts generated by key presses on the keyboard), and instead put the CPU in charge of explicitly polling the keyboard (via the IN and OUT instructions) at points in the program where keyboard input is needed? That way I could use the IN and OUT instructions in my software directly (not depending on the underlying DOS OS to do it), and therefore avoid the use of INT calls, as well as avoid hardware interrupts being fired by the connected keyboard (and other input devices). That would prevent interrupts from happening that could kick the program's execution out of its valid 32-bit segment (which would otherwise crash the system). Is it possible to completely disable the use of all interrupts, and instead depend on the use of IN and OUT directly from the program itself?
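The polling approach described above can be sketched like this (port 0x64 is the 8042 keyboard controller status port and 0x60 its data port; this is a busy-wait sketch, not a complete keyboard driver):

```asm
; poll the 8042 keyboard controller instead of relying on IRQ 1 (sketch)
wait_key:
    in   al, 0x64       ; read controller status
    test al, 1          ; bit 0 = output buffer full?
    jz   wait_key       ; no data yet, keep polling
    in   al, 0x60       ; read the raw scancode into AL
```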

As for what I plan to do, my plan is simple:
Create a COM file whose first action when run from DOS is to switch into 32-bit protected mode.
Then have it run 32-bit code.
It has no requirement of being able to switch back into real mode (which should significantly cut down on program complexity), as I intend the program to continue running until the computer is powered off.
Because it has no need to exit to DOS, there's no need to keep in mind that the COM file is running within DOS. It can reuse any memory that DOS previously used for its code (it can overwrite DOS code), so I don't need to be careful about keeping DOS intact, which should make it easier to write the COM file without restrictions.
I don't need it to use interrupts of any kind (hardware or software), because I intend to use IN and OUT to directly communicate with any other hardware (keyboard, mouse, etc.).
So completely disabling all interrupts (even more thoroughly than can be done with the CLI instruction) would be beneficial (I don't want an interrupt to kick me out of 32-bit protected mode, which could crash the system), and I don't want to have to bother writing 32-bit interrupt code.