Reply 81 of 163, by superfury
After fixing the FDC seeking to no longer error out when seeking past cylinder 0 (I don't know why that check was there in the first place), running my TP6.0 test program against the second FDC disk now seems to double seek on the second floppy drive instead?
Reply 82 of 163, by superfury
Hmmmm... Just trying to run Windows 3.0 (from the fake86 MS-DOS 6 hard disk image) goes very awkwardly: it sets a text mode instead of a graphics mode, making the whole Windows boot (and everything after it) unreadable? Something definitely goes completely wrong there? On an XT NEC V30 emulation, no less?
Reply 83 of 163, by superfury
Trying to run the fake86 MS-DOS Wolfenstein 3D on the 80286 seems to work fine now? Only the 80386+ software seems to bug out: Windows 3.0 on all CPUs (video mode problems? Also, win /r seems to crash into an #UD?), and FreeDOS crashes during the initial "..." loading output (printing ".....Error?." then hanging)?
Reply 84 of 163, by superfury
Just tried to run 8088 MPH with the current 80188 core (which is essentially an 8088 with the 80186 instructions and the segment limit overflow bug added). I see it's doing something strange during the IBM vectorball part:
Can you shed some light on this, reenigne, Jepael, vladstamate?
The RETF timings have changed a bit, though, now at 166X cycles (1% divergence), so that's probably why the sprite part is (once again) out of sync?
Currently running from the 360K 8088MPH floppy disk:
Edit: The Delorean car is out of sync again(figures).
Edit: The racing-the-CRT part shows way more black screens again.
Edit: Credits still crash, though.
Reply 85 of 163, by superfury
Just tried running EMM386.EXE from MS-DOS 6.22 again. It seems to crash somewhere, after which the VM monitor crashes executing opcode 0xFF14? So there's a problem with the 0xFF /2 (near indirect CALL) opcode?
I see that the invalid value is caused by ECX being 0xFF (the operand being DS:[disp32+ECX*2]), whereas it's lower in all other cases? So the ECX register is the problem here? Then the question is: what puts the 0xFF value into the ECX register?
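For reference, the 0xFF14 sequence decodes mechanically; here is a minimal sketch of splitting the ModR/M byte (not UniPCemu's actual decoder), where 0x14 gives mod=00, /2 (near indirect CALL within the 0xFF group) and rm=100b, meaning an SIB byte follows in 32-bit addressing:

```c
#include <assert.h>
#include <stdint.h>

/* Split a ModR/M byte into its three fields. */
typedef struct { uint8_t mod, reg, rm; } modrm_t;

static modrm_t split_modrm(uint8_t b)
{
    modrm_t m;
    m.mod = (b >> 6) & 3; /* addressing mode (00 = memory, no displacement) */
    m.reg = (b >> 3) & 7; /* /digit: opcode extension for group opcodes like 0xFF */
    m.rm  = b & 7;        /* base form; 100b means an SIB byte follows (32-bit mode) */
    return m;
}
```

The DS:[disp32+ECX*2] operand then comes from the following SIB byte: scale ×2, index ECX, and base=101b with mod=00 selecting a disp32.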
Edit: One little question: is EMM386 supposed to run in 16-bit protected mode? That is, is the VM86 monitor supposed to run in 16-bit protected mode instead of 32-bit protected mode?
I see the CS descriptor loaded in the monitor being 0x00009a121400ffff, which is a 16-bit code segment descriptor? Is this correct for EMM386?
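Decoding that raw descriptor by hand supports the 16-bit reading; a small sketch of the field extraction (per the standard 80386 descriptor layout, not UniPCemu's own code):

```c
#include <assert.h>
#include <stdint.h>

/* Extract the interesting fields of a raw 8-byte x86 segment descriptor. */
static uint32_t desc_base(uint64_t d)
{
    /* base 23:0 lives in bits 16-39, base 31:24 in bits 56-63 */
    return (uint32_t)((d >> 16) & 0xFFFFFF) | (uint32_t)(((d >> 56) & 0xFF) << 24);
}
static uint32_t desc_limit(uint64_t d)
{
    /* limit 15:0 in bits 0-15, limit 19:16 in bits 48-51 */
    return (uint32_t)(d & 0xFFFF) | (uint32_t)(((d >> 48) & 0xF) << 16);
}
static int desc_dbit(uint64_t d)
{
    return (int)((d >> 54) & 1); /* D/B flag: 0 = 16-bit segment, 1 = 32-bit */
}
```

For 0x00009a121400ffff this yields base 0x121400, limit 0xFFFF, access byte 0x9A (present, code, readable) and D/B = 0, i.e. indeed a 16-bit code segment.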
Edit: Hmmmm.... After adding some simple logging for leaving Virtual 8086 mode, I see that the last INT 21h before a bunch of text output instructions and video calls (probably the EMM386 fault handler showing its information about the 06h crash) has AX=0659h.
Edit: Maybe that's already the fault handler of EMM386.EXE? The last call before that was AX=4B00h, so MS-DOS was trying to load and execute a program (which, after the device drivers have loaded and announced themselves, should just be COMMAND.COM afaik?). This happens at timestamp 00:05:33:46.01056.
The format in the log is: INT [intnr](opcode(0F:[is0Fopcode])),immb=[immediate byte (if any)],AX=[AXvalue]
Reply 86 of 163, by superfury
Reply 87 of 163, by superfury
Just ran setup.exe on the fake86 Windows 3.0 disk image. It turns out Windows 3.0 was set up for the Hercules graphics card; changing it back to CGA makes it work correctly on the CGA graphics card.
OK. Running in VGA mode, the cursor keys (left/up/right/down) seem unresponsive on the Compaq Deskpro 386 (running in real mode using the /r switch). Otherwise, it runs without other noticeable problems.
Reply 88 of 163, by superfury
Odd: Just tried running the Windows 95 setup.exe from a CD-ROM ISO file (a minimized custom version containing only the WIN95 folder and SETUP.EXE, with a simple KEY.TXT for the key to test with). Running SETUP from the root of the disc (not the WIN95 folder) results in a "run-time error R6003 - integer divide by 0"?
Edit: It's in a permanent HLT state(HLT with Interrupt Flag being 0) at 0000:0005.
Edit: Just tried running SETUP.EXE from the Windows 95a hard disk (which is still very slow, with the video seeming to have refresh issues?). It still crashes with the #DE exception, which somehow ends up crashing at 0000:00XX and then again into an #UD?
Edit: Trying to boot the Windows 95a boot disk from http://www.allbootdisks.com/download/95.html makes it read sector 0, then sectors 19-21 (twice), followed by an odd seek to cylinder 96, which doesn't exist on the 80-track 1.44MB disk? It then errors out with a "Disk I/O error", probably because of the invalid seek?
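For illustration only (an assumption about the mechanism, not a confirmed diagnosis): an impossible cylinder like 96 is exactly what a geometry mismatch in an LBA-to-CHS conversion produces. Treating the 18-sectors-per-track 1.44MB disk as having 9 sectors per track doubles every computed cylinder:

```c
#include <assert.h>
#include <stdint.h>

#define HEADS 2 /* both 720K and 1.44MB disks are double-sided */

/* Compute the cylinder a linear sector number lands on,
   given an assumed sectors-per-track geometry. */
static uint32_t lba_to_cylinder(uint32_t lba, uint32_t spt)
{
    return lba / (HEADS * spt);
}
```

With the wrong (halved) sectors-per-track value, a sector that really lives on cylinder 48 is sought at cylinder 96, which is at least consistent with the doubling pattern seen here.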
Reply 89 of 163, by superfury
Since it's seeking to an impossibly high cylinder number, does that mean there's a problem in how the boot sector/IO.SYS (in this case of MS-DOS 7) loads the OS?
Edit: Yay:/ It seems to somehow be double seeking again (seeking to cylinder 2 instead of the cylinder 1 that the program running from the floppy disk requested)! 😖
Edit: Looking at the BIOS calls, this is once again turned on at the very first CHS 0,0,1 boot sector read when booting the floppy:S (monitoring address 0x490 for bit 6 being set; it gets set to 0x61, as in the BIOS).
Edit: Looking at the code just before double seeking is enabled, it seems to always turn it on in two cases:
- somehow also for 1.44MB and 2.88MB drives, due to bit 2 being set on the second drive?
- when the drive type in the CMOS is less than 3 (i.e. less than a 720K drive)
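The two cases above boil down to a simple condition; as a sketch (hypothetical function and parameter names, and note the first case is presumably the misbehaviour being chased, not correct behaviour):

```c
#include <assert.h>

/* CMOS floppy drive types: 1 = 360K, 2 = 1.2M, 3 = 720K, 4 = 1.44M, 5 = 2.88M.
   Double stepping is only truly needed for a 40-track (360K) medium in an
   80-track drive, so enabling it for types 4/5 via the bit-2 case looks wrong. */
static int bios_enables_double_step(int cmos_drive_type, int bit2_set)
{
    return bit2_set || (cmos_drive_type < 3);
}
```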
Reply 90 of 163, by superfury
Now looking at the MS-DOS 7.0 that's used with the Dosbox Windows 95 disk image (from the Windows 95 on PSP tutorials). It seems to constantly (at least once a second) set the video mode again while the command prompt is running, which accounts for part of the heavy slowdown when using it in UniPCemu?
Edit: Windows 95 setup goes completely off the rails when executed from that disk image.
Reply 91 of 163, by superfury
I'm currently still trying to get the floppy disk (which still keeps double seeking for some unknown reason?) and the Windows 95 setup to run (with the issues still present). Can anyone see what's wrong with the CPU emulation?
CPU emulation files:
https://bitbucket.org/superfury/unipcemu/src/ … /cpu/?at=master
cpu.c: Basic CPU core support
opcodes(0F)_80(X)86.c/opcodes_NECV30.c: The basic instructions added with each CPU (8086, NEC V30 (actually 80186), 80286, 80386, 80486; the Pentium only theoretically, currently still disabled due to being unable to run at cycle accuracy and not having its added functionality besides CPUID implemented yet).
modrm.c: ModR/M parsing, decoding and I/O support.
protection.c: Basic protected mode support, segmentation support etc.
flags.c: Basic flag calculations for common algorithmic instructions.
biu.c: BIU emulation.
paging.c: Paging support.
timings.c: Timings and instruction parse/decode information tables(which are used for handling instruction timings and instruction fetching/decoding).
protecteddebugging.c: Protected mode debugger register handling.
cpu_execution.c: Execution phase support for the CPU to handle different tasks through the BIU(memory I/O, task switching, interrupts, basic opcode phase(which handles running opcodes)).
multitasking.c: Handles the task switching(called by cpu_execution.c's handler during the task switch execution phase).
unkop.c: #UD handlers for different CPU generations.
cb_manager.c: Simple instruction generation for ROM code to handle internal emulation calls (using port I/O) and the emulation entry point. The entry point runs the emulator's starting code, which displays the initial yellow text option and loads ROMs and other test cases when running the internal BIOS, before starting the normal emulation fully (after it loads the BIOS ROMs, or uses the internal BIOS ROM option, which has been untested for a long time).
cpu_jmptbls(0f).c: Mapping of all supported (0F) instructions for all supported CPUs (read once when starting a CPU's emulation and translated (reduced) to a simpler, optimized lookup table for the specified CPU).
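The per-CPU reduction described for cpu_jmptbls(0f).c could look roughly like this (a sketch with invented names, and an int handler id standing in for the real function pointers): walk a master list of (minimum CPU, opcode, handler) entries and keep, per opcode, the newest handler the emulated CPU supports:

```c
#include <assert.h>
#include <stddef.h>

typedef struct {
    int min_cpu;          /* 0=8086, 1=80186, 2=80286, 3=80386, ... */
    unsigned char opcode;
    int handler;          /* handler id; a function pointer in real code */
} opcode_entry;

/* Reduce the master table to one handler per opcode for the given CPU.
   Assumes the master list is ordered by ascending min_cpu, so later
   (newer-CPU) entries override earlier ones when the CPU is new enough. */
static void build_jmptbl(int out[256], const opcode_entry *master, size_t n, int cpu)
{
    for (size_t i = 0; i < 256; ++i)
        out[i] = -1; /* -1: route to the #UD handler for this CPU generation */
    for (size_t i = 0; i < n; ++i)
        if (master[i].min_cpu <= cpu)
            out[master[i].opcode] = master[i].handler;
}
```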
Can anyone see what's going wrong? As far as I can see, the instructions match the manuals (as far as things are documented) completely (at least as far as the opcode bytes are concerned)?
Reply 92 of 163, by superfury
Oddly enough, I've compared my instruction information data with the 80386 manual's appendix A over and over again, but can't find any errors anymore.
So there must be a problem somewhere in some instructions' execution (the opcode handlers)? Can anyone see what's going wrong? As far as I can see (except for faults, which don't occur until the #UD exception in the Windows 95 setup), the CPU cores themselves look fine?
Then why is setup erroring out? Some rogue jump? An error in calculations(unpacking?)?
Reply 93 of 163, by superfury
Does anyone know the different phases of the Windows 95 setup program? So, what it's doing and the accompanying segment selector (or offset within the program's segment) that executes each block? That way I'd at least know what it's trying to do.
Reply 94 of 163, by superfury
Just tried running the MS-DOS 5.0a setup from a setup floppy on the XT NEC V20 configuration. When proceeding to the first installation step (configuration), selecting the country option and pressing Enter somehow re-executes the boot sector in a corrupted way, displaying garbage on the screen and crashing with a not-bootable message (although that last part can be attributed to the boot sector as well).
So there's definitely a big problem in the base 80(1)86 emulation core? But where is said problem? Any way to find out?
Reply 95 of 163, by superfury
Just found some problems concerning the XT PPI and the keyboard clock line (which allows the keyboard to be disabled and enabled, the latter also resetting it). These have been fixed now.
Fixing the XT PPI to work correctly again also fixes the parity errors that were reported by the Supersoft/Landmark diagnostics BIOS for the RAM tests.
Reply 96 of 163, by superfury
Just went and fixed a lot of simple 'bugs' and warnings issued by MinGW and Visual Studio (code analysis functionality). Now UniPCemu somehow flat out crashes when using the 1kHz Dosbox-style IPS mode?
Edit: Just tested the default setting (setting value 0) and the 2kIPS setting (setting value 2). Both run in IPS mode without errors, but only at the 1kIPS setting does the emulation somehow crash before it even gets to the ROM code for the emulator configuration itself?
Edit: Managed to narrow it down a bit: only on Android, running at the 1-cycle setting (1kIPS or 1kHz doesn't matter), the app crashes before/while loading the Settings menu boot script (after executing the required instructions normally)?
Reply 97 of 163, by superfury
Just tried my 80-track seek test program (see the UniPCemu repository for its Turbo Pascal 6.0 code) against my current IBM AT emulation. Apparently, tracks 53-79 fail the test: they time out because the BIOS seems to be counting too fast (only ~1.43 seconds, with the step rate byte set to 0xDF, thus step rate 0xD (28000028ns for each track)). So somehow, the IBM AT is running too fast?
Edit: Changing the DMA to run at half the CPU clock speed (3MHz in the case of the default AT config) increases the range to a maximum of track 62. So the CPU is still too fast, for some reason.
The documentation on the AT says that bus transactions (I/O using the IN and OUT instructions) take 6 cycles per byte access, or 12 cycles for two byte accesses, in total. That matches the documentation on I/O reads (which are 5 cycles). But the OUT instruction seems to be faster (only taking 3 cycles)? Thus resulting in 4 cycles for each transaction instead of 6?
So somehow, either the DMA or the FDC isn't being timed correctly? The DMA should be running at 4MHz in the 8MHz AT configuration (exactly half the CPU clock, i.e. the CPU clock divided by 2), so why isn't the speed correct? Why isn't it waiting the full 2 seconds required?
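A quick sanity check of the intended arithmetic (hypothetical helper names; the 6-cycles-per-byte figure is from the AT documentation as described above):

```c
#include <assert.h>
#include <stdint.h>

/* On the AT the DMA controller is clocked from the CPU clock divided by 2. */
static uint32_t dma_clock_hz(uint32_t cpu_clock_hz)
{
    return cpu_clock_hz / 2;
}

/* Documented bus transaction cost: 6 cycles per byte access
   (the emulation apparently charges only 4, hence the drift). */
static int bus_transaction_cycles(int byte_accesses)
{
    return 6 * byte_accesses;
}
```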
Reply 98 of 163, by superfury
I've now changed the DMA to run at the proper speed on AT machines (still 4.77MHz on the Compaq Deskpro 386/XT/PS/2 architecture settings; half the CPU clock speed (based on emulated CPU cycles divided by 2) on the AT).
The PIT is supposed to handle the specifics of the clock used to drive the FDC delay(it programs the PIT for Request Refresh Cycle):
http://www.minuszerodegrees.net/manuals/IBM_5 … 02243_MAR84.pdf
The system has three programmable timer/counters controlled by an Intel 8254-2 timer/counter chip and defined as Channels 0 through 2 as follows:

Channel 0 System Timer
GATE 0 Tied on
CLK IN 0 1.190 MHz OSC
CLK OUT 0 8259A IRQ 0

Channel 1 Refresh Request Generator
GATE 1 Tied on
CLK IN 1 1.190 MHz OSC
CLK OUT 1 Request Refresh Cycle

Note: Channel 1 is programmed as a rate generator to produce a 15-microsecond period signal.

Channel 2 Tone Generation for Speaker
GATE 2 Controlled by bit 0 of port hex 61 (PPI bit)
CLK IN 2 1.190 MHz OSC
CLK OUT 2 Used to drive the speaker

The 8254-2 Timer/Counter is a programmable interval timer/counter that system programs treat as an arrangement of four external I/O ports. Three ports are treated as counters; the fourth is a control register for mode programming. Following is a system-timer block diagram.
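The quoted 15-microsecond refresh period can be sanity-checked against the ~1.19 MHz PIT input clock: the classic PC/AT reload value of 18 for channel 1 gives a period just over 15 µs (the constants here are the standard ones, not taken from UniPCemu):

```c
#include <assert.h>

/* Period in nanoseconds produced by a PIT channel in rate-generator mode,
   given its reload count and the ~1.193182 MHz input clock. */
static long long pit_period_ns(long long reload)
{
    return reload * 1000000000LL / 1193182LL;
}
```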
So the AT still uses the PIT channel 1 timer for its DMA memory refresh, it seems?
Edit: It seems the BIOS uses Interrupt 15h, function 86h for its FDC timing purposes? So it actually uses a periodic interrupt function to time it?
Edit: It seems odd that the higher track seeks (past track 62) fail from track 0 (after a recalibrate)? Especially since the PIT is delivering the correct rate (which should be unused by the FDC code) and the RTC should be using the correct 1024Hz timer that the BIOS calibrates its FDC seek against?
Reply 99 of 163, by superfury
Whoops, found a bug in the generation of the FDC step rate lookup table: the rates were invalid, counting up (getting bigger) from the 0th value, where the correct calculation is a subtraction instead of an addition. Thus 26000026ns wasn't the correct value to use for the set mode:S
Edit: Having fixed those lookup tables to be calculated properly, the AT BIOS now seeks correctly on the 1.44MB disk again 😁
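For reference, the usual µPD765 reading of the SRT nibble is indeed a count-down: the step rate time is (16 - SRT) milliseconds at the 500 kbps data rate (longer at lower rates), so the table must shrink as the programmed value grows. A sketch built by subtraction (assuming this matches the fix; UniPCemu's exact nanosecond constants like 26000026ns aren't reproduced here):

```c
#include <assert.h>
#include <stdint.h>

/* Step rate time in nanoseconds for SRT nibble srt (0..15) at 500 kbps:
   (16 - SRT) milliseconds, so larger programmed values mean faster stepping. */
static uint64_t fdc_step_ns(unsigned srt)
{
    return (uint64_t)(16u - srt) * 1000000ULL;
}
```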
Although MS-DOS thinks there's only one floppy drive in the system? Selecting drive B: makes it ask for a disk to be inserted and a key to be pressed, but it then reads drive A instead?
So there are still CPU problems left (the Compaq double seeking the 1.44MB disks)...
Edit: The second 1.44MB drive (drive A is 1.44MB too) acts oddly on the IBM AT emulation: it seeks to cylinder 48, then executes a recalibrate, but executes a Sense Interrupt Status before the recalibration finishes (unlike drive A)?