x86 software vs hardware clocks

Emulation of old PCs, PC hardware, or PC peripherals.

x86 software vs hardware clocks

Postby superfury » 2017-11-25 @ 23:29

Currently UniPCemu uses double-precision floating point numbers to handle most hardware timings (and sound sample generation). That takes up a lot of clock cycles on the host CPU, because floating point numbers are harder to process than simple integer values.

How much of the IBM PC MS-DOS gaming hardware (up to the Compaq Deskpro 386) actually runs off the 14MHz clock? What about the audio sampling rates? Are they always multiples of that clock too?
superfury
l33t
 
Posts: 2267
Joined: 2014-3-08 @ 11:25
Location: Netherlands

Re: x86 software vs hardware clocks

Postby Jepael » 2017-11-26 @ 00:31

I would not call it a 14MHz clock; that would imply it is a 14.000MHz clock, which it isn't.
At least it should be called the 14.318MHz clock. A good enough approximation for most uses is 14.31818 MHz or 14318180 Hz.
Actual definition for the clock rate comes from NTSC standards and is exactly 315/22 MHz.

On a 4.77MHz PC, these run on x=14.318MHz clock:
-CPU itself (x/3) and thus the ISA bus
-PIT (timer, speaker, memory refresh) (x/12)
-FM synthesis (chip=x/4, samplerate=x/288)
-CMS/GameBlaster (chip=x/2, I recall the frequency granularity being 1 count per chip clock too, so samplerate=x/2 too)
-CGA cards use it for pixel clock
-EGA cards use it for pixel clock in CGA modes

These don't:
-SB (SB1/SB2/SBPRO 1MHz granularity)
-Covox (no clock, but in theory as it sits on ISA bus the sample can update at ISA bus cycle granularity)
-Disney Sound System (7kHz +/- 5%)
-EGA cards in EGA mode (16.257MHz)
-VGA cards (25.175MHz, 28.322MHz)
-MDA cards (16.257MHz)
-Hercules cards (16.257MHz or 16.000MHz)

Any other hardware you have in mind?
Jepael
Oldbie
 
Posts: 1195
Joined: 2005-6-15 @ 19:28
Location: Finland

Re: x86 software vs hardware clocks

Postby superfury » 2017-11-26 @ 01:27

I've just adjusted UniPCemu's hardware clocking (all but the video card cores) to tick at a 14.31818MHz pace. That will make them run faster (in real-time speed) when running at CPU speeds past the 8088 4.77MHz equivalent, although stuff like the Sound Blaster is a bit less accurate (its 1MHz clock isn't used directly, but rounded to 14MHz chunks).

Will that be OK when running applications/rendering (which also run off that rate)?
superfury
l33t
 
Posts: 2267
Joined: 2014-3-08 @ 11:25
Location: Netherlands

Re: x86 software vs hardware clocks

Postby reenigne » 2017-11-26 @ 09:15

There's no need to compromise on any of this stuff. You can have as many different frequency sources as you like, have them all be perfectly accurate, and do all your computations with integers. Suppose you want to have both 14.318MHz and 1MHz clocks. You need to find the least common multiple of those frequencies, which is 315MHz (divide by 22 to get the 14.318MHz clock and by 315 to get the 1MHz clock). So you can just do all your measurements in units of this fast clock - if you want to count N cycles of the 14.318MHz clock, increase your (integer) time variable by N*22 and if you want to count M cycles of the 1MHz clock, increase your time variable by M*315. The only tricky bit is that you have to be careful to avoid your time variable overflowing, which just means that you have to emulate in sufficiently small chunks that no component ever gets too far behind any other. I think in practice you won't even have to resort to 64-bit integers - 32 bits should be fine.
reenigne
Member
 
Posts: 422
Joined: 2006-11-30 @ 05:13
Location: Cornwall, UK

Re: x86 software vs hardware clocks

Postby superfury » 2017-11-26 @ 17:49

But the problem is that ticking a clock of 315MHz will pretty much hang every usual CPU (at least 4.0GHz i7 CPUs)? There's also all the strange stuff used by various sound cards and graphics cards (MDA, CGA, EGA, VGA, ET3000, ET4000), which varies from 14.31818MHz all the way up to about 60-70MHz (ET4000)?
superfury
l33t
 
Posts: 2267
Joined: 2014-3-08 @ 11:25
Location: Netherlands

Re: x86 software vs hardware clocks

Postby reenigne » 2017-11-26 @ 19:02

superfury wrote:But the problem is that ticking a clock of 315MHz will pretty much hang every usual CPU (at least 4.0GHz i7 CPUs)?


Why would it? There isn't anything in the emulator that is actually run 315 million times per second - it's just an accounting mechanism. You'd always be increasing a time variable by a multiple of 22 or 315, never by just 1.

There's also all the strange stuff used by various sound cards and graphics cards (MDA, CGA, EGA, VGA, ET3000, ET4000), which varies from 14.31818MHz all the way up to about 60-70MHz (ET4000)?


Try putting the frequencies of all the crystals in question into the LCM algorithm and see what base frequency you get. Pick a value for your "how far apart components can be in time" parameter (say 20ms), and see how many cycles of that base clock you get in that time to see how big the clock numbers need to get.
reenigne
Member
 
Posts: 422
Joined: 2006-11-30 @ 05:13
Location: Cornwall, UK

Re: x86 software vs hardware clocks

Postby superfury » 2017-11-26 @ 21:59

There's also the problem of infinitely long numbers, like the (S)VGA frequencies? The same with MDA frequencies?
superfury
l33t
 
Posts: 2267
Joined: 2014-3-08 @ 11:25
Location: Netherlands

Re: x86 software vs hardware clocks

Postby reenigne » 2017-11-26 @ 22:13

superfury wrote:There's also the problem of infinitely long numbers, like the (S)VGA frequencies? The same with MDA frequencies?


What do you mean by infinitely long numbers? I guarantee that no SVGA or MDA card has a crystal whose nominal value is an irrational number of Hertz. If you mean infinitely long as a decimal expansion (like 14.31818181...MHz) then these are just rational numbers which the LCM method copes with just fine. In the case where you don't know the nominal value or have a range of "in spec" frequencies, just pick the midpoint or a nearby (in spec) "nice looking" number.
reenigne
Member
 
Posts: 422
Joined: 2006-11-30 @ 05:13
Location: Cornwall, UK

Re: x86 software vs hardware clocks

Postby Jepael » 2017-11-26 @ 22:15

superfury wrote:There's also the problem of infinitely long numbers, like the (S)VGA frequencies? The same with MDA frequencies?


I don't follow; how are they infinitely long?

Most later cards had a PLL anyway to generate the target pixel clock with multipliers and divisors, commonly from the 14.318MHz crystal onboard the card.
Jepael
Oldbie
 
Posts: 1195
Joined: 2005-6-15 @ 19:28
Location: Finland

Re: x86 software vs hardware clocks

Postby superfury » 2017-11-27 @ 00:48

Then what about the VGA 25/28MHz clock? And the ET3K/ET4K clocks?

Btw, does anyone know the exact EGA and MDA clocks? As well as the ET3/4K clocks? All I can find are rounded numbers, not exact values (e.g. the VGA clocks already present in UniPCemu: 25.2*1.001 and 28.35*1.001).

EGA currently runs on 14.31818 and 16MHz exactly. Is that correct?
superfury
l33t
 
Posts: 2267
Joined: 2014-3-08 @ 11:25
Location: Netherlands

Re: x86 software vs hardware clocks

Postby reenigne » 2017-11-27 @ 07:56

superfury wrote:Then what about the VGA 25/28MHz clock?


The VGA clocks are based on having a frame rate of exactly 60Hz/1.001 (i.e. exactly twice NTSC) when using 480-line modes. These have 800 or 900 total pixels horizontally by 525 total lines vertically. So the nominal values are exactly 25.2MHz/1.001 and 28.35MHz/1.001, or 160/91 and 180/91 of the 14.318MHz PC/CGA clock.

superfury wrote:And the ET3K/ET4K clocks?


I don't know about those.

superfury wrote:Btw, anyone knows the exact EGA and MDA clocks?

EGA currently runs on 14.31818 and 16MHz exactly. Is that correct?


Both MDA and the EGA's 350-line modes use 16.257MHz, I understand. I haven't been able to find a derivation of that frequency, so I'd probably just use 16257MHz/1000 for now. It's probably accurate to +/-50ppm, so maybe you'll be able to find a nicer ratio within about 800Hz of that value.
reenigne
Member
 
Posts: 422
Joined: 2006-11-30 @ 05:13
Location: Cornwall, UK

Re: x86 software vs hardware clocks

Postby Jepael » 2017-11-27 @ 08:24

superfury wrote:Then what about the VGA 25/28MHz clock? And the ET3K/ET4K clocks?

Btw, anyone knows the exact EGA and MDA clocks? As well as the ET3/4K clocks? All I can find are rounded numbers, not exact values(e.g. like the VGA clocks already present in UniPCemu(25.2*1.001 and 28.35*1.001)?)

EGA currently runs on 14.31818 and 16MHz exactly. Is that correct?


I think I posted the exact clocks already. How are they rounded, in your opinion?

VGA: 25.175000 MHz, 28.322000 MHz
EGA: 16.257000 MHz, and 14.318181.... MHz coming from the motherboard.
MDA: 16.257000 MHz.

The ET4k uses whatever you choose to feed it. I bet they either use standard crystals or a PLL chip that generates these from a 14.318... MHz crystal, but then they might be approximations dependent on the PLL chip, as a ratio like 11077:6300 may start to be too large.

Note that real crystals have a tolerance that may be about 50ppm; they are hardly ever tuned in consumer equipment.

Using the exact 25.175000 and 28.322000 MHz values results in an error of no more than 12ppm compared to the "ideal" line rates.
Also, I've never found evidence that VGA clocks would be specifically tuned to NTSC based fractions, they were just standard crystals.

Edit: Forgot to mention that modern video standards also do specify the pixel clock to be exactly 25.175 MHz.
Jepael
Oldbie
 
Posts: 1195
Joined: 2005-6-15 @ 19:28
Location: Finland

Re: x86 software vs hardware clocks

Postby reenigne » 2017-11-27 @ 10:33

Jepael wrote:Also, I've never found evidence that VGA clocks would be specifically tuned to NTSC based fractions, they were just standard crystals.


The alternative (that VGA pixel clocks are unrelated to NTSC frequencies) implies that the VGA 480p signal being exactly 525 lines (exactly double standard NTSC) at a frame rate of 59.94Hz (double standard NTSC) is a coincidence, which seems far fetched to me.

That doesn't mean that the VGA frequencies were ever accurate to NTSC standard (+/-10ppm) - it may be that after the nominal frequency was chosen, IBM designers decided on crystals that were 25.175MHz and 28.322MHz +/- 30ppm or 50ppm as being "close enough". And it seems quite possible to me that (if those were standard crystal values before the advent of VGA) then they were standard values because of their relationship to NTSC frequencies.

Jepael wrote:Edit: Forgot to mention, that also modern video standards do specify the pixel clock as to be exactly 25.175 MHz


...within a certain tolerance (because when you're building hardware getting a frequency to be exact is impossible). And both 25175MHz/1000 and 25200MHz/1001 lie within that tolerance.

Btw, this isn't completely theoretical - I have used an IBM VGA card to generate composite NTSC signals, with the color carrier period being 4*(90/91) pixels in a 256 colour mode. It worked great - there was no noticeable hue drift between the left and right sides of the screen. The VGA's CRTC (at least the original IBM one) has a bit for setting the DRAM refresh bandwidth in order to support generating video with a 15.75KHz line rate. So compatibility with NTSC monitors was something IBM's engineers had thought about, even if it didn't end up being something that was supported by the BIOS.
reenigne
Member
 
Posts: 422
Joined: 2006-11-30 @ 05:13
Location: Cornwall, UK

Re: x86 software vs hardware clocks

Postby Jepael » 2017-11-27 @ 11:27

reenigne wrote:
Jepael wrote:Also, I've never found evidence that VGA clocks would be specifically tuned to NTSC based fractions, they were just standard crystals.


The alternative (that VGA pixel clocks are unrelated to NTSC frequencies) implies that the VGA 480p signal being exactly 525 lines (exactly double standard NTSC) at a frame rate of 59.94Hz (double standard NTSC) is a coincidence, which seems far fetched to me.

That doesn't mean that the VGA frequencies were ever accurate to NTSC standard (+/-10ppm) - it may be that after the nominal frequency was chosen, IBM designers decided on crystals that were 25.175MHz and 28.322MHz +/- 30ppm or 50ppm as being "close enough". And it seems quite possible to me that (if those were standard crystal values before the advent of VGA) then they were standard values because of their relationship to NTSC frequencies.


I did not mean to say they are completely unrelated to NTSC frequencies - it is common knowledge that they are designed to be (very close to) double the NTSC scan rate, so scanline converters can be used to connect to NTSC television sets, and that they are close enough. What I meant is that the clock is just not a perfect ratio to the NTSC frequency but only a very close approximation, while the 13.5MHz clock used for SD video is an exact multiple that matches both NTSC and PAL signals. Televisions will have no trouble syncing to this.

reenigne wrote:
Jepael wrote:Edit: Forgot to mention, that also modern video standards do specify the pixel clock as to be exactly 25.175 MHz


...within a certain tolerance (because when you're building hardware getting a frequency to be exact is impossible). And both 25175MHz/1000 and 25200MHz/1001 lie within that tolerance.


I should know: the tolerance of a real-world 25.175 MHz crystal falls well within the absolute NTSC frequency; however, the broadcast NTSC standard was pretty strict, below 3ppm, out of range for normal consumer equipment.

I did find this though, just for reference to superfury so he does not do anything overly complex regarding these clocks:
http://www.minuszerodegrees.net/video/IBM%20PS2%20Display%20Adapter.jpg - genuine IBM card, note the crystal frequency specified on this.

But, also found this a little disturbing:
http://www.yjfy.com/images/oldhard/video/IBM_VGA.jpg - again see the 28MHz crystal!
Jepael
Oldbie
 
Posts: 1195
Joined: 2005-6-15 @ 19:28
Location: Finland

Re: x86 software vs hardware clocks

Postby superfury » 2017-11-27 @ 12:17

Just looked at it (rotating the images 180 degrees to read the crystal values more easily). The 25MHz crystals both say 25.175MHz. The 28MHz crystal seems odd indeed: VGA=28.3210MHz, PS/2=28.3220MHz. So it's about 0.0010MHz off? So 1000ppm off? If its limit is 30 or 50ppm, that's not supposed to work with an NTSC monitor? It's way off, even by their own standards?
superfury
l33t
 
Posts: 2267
Joined: 2014-3-08 @ 11:25
Location: Netherlands

Re: x86 software vs hardware clocks

Postby Jepael » 2017-11-27 @ 12:44

superfury wrote:Just looked at it(rotating the images 180 degrees to read the crystal values easier). The 25MHz crystals both say 25.175MHz. The 28MHz crystal seems odd indeed: VGA=28.3210MHz, PS/2=28.3220MHz


Note that it reads 25.175000 and 28.322000, which do indicate it's designed for that (within tolerance) and not to 25.2/1.001 for example.

superfury wrote:So it's about 0.0010MHz off? So 1000ppm off?


No, how did you come up with that? It's 35ppm.

superfury wrote:If its limit is 30 or 50ppm, that's not supposed to work with an NTSC monitor? It's way off, even by their own standards?


What makes you think so? TVs and monitors can lock on to an incoming signal that's way off from that. Even digital TVs today must allow a 0.5% tolerance; that's 5000ppm.

Edit: Having said that, video cards that have a single 14.31818... MHz crystal with a PLL to generate the required clocks won't produce exactly those values. A Tseng card with a certain PLL can only generate a clock of 25.255681.. MHz; that's a 0.32% error, and I bet nobody can tell the difference, it will work just fine. It uses 14.31818 MHz * 127 / (18*4) to do that.
Jepael
Oldbie
 
Posts: 1195
Joined: 2005-6-15 @ 19:28
Location: Finland

Re: x86 software vs hardware clocks

Postby superfury » 2017-11-27 @ 13:13

Jepael wrote:
superfury wrote:Just looked at it(rotating the images 180 degrees to read the crystal values easier). The 25MHz crystals both say 25.175MHz. The 28MHz crystal seems odd indeed: VGA=28.3210MHz, PS/2=28.3220MHz


Note that it reads 25.175000 and 28.322000, which do indicate it's designed for that (within tolerance) and not to 25.2/1.001 for example.

superfury wrote:So it's about 0.0010MHz off? So 1000ppm off?


No, how did you come up with that? It's 35ppm.

superfury wrote:If its limit is 30 or 50ppm, that's not supposed to work with an NTSC monitor? It's way off, even by their own standards?


What makes you think so? TVs and monitors can lock on to the incoming signal that's way off that. Even digital TVs today must allow 0.5% tolerance, that's 5000ppm.

Edit: Having said that, video cards that have a single 14.31818... MHz crystal with PLL to generate the required clocks, won't be exactly that. A Tseng card with certain PLL can only generate clock that is 25.255681.. MHz, that's 0.32% error and I bet nobody can tell the difference, it will work just fine. It uses 14.31818 MHz * 127 / (18*4) to do that.


I've just adjusted all of UniPCemu's timings according to the previous posts, while adding special support for using the motherboard's 14MHz clock directly. All other clocks use a normal double-precision floating-point clock the old way.
superfury
l33t
 
Posts: 2267
Joined: 2014-3-08 @ 11:25
Location: Netherlands

Re: x86 software vs hardware clocks

Postby BloodyCactus » 2017-11-27 @ 13:43

https://gafferongames.com/post/fix_your_timestep/

A good article on clocking and FPS, and on dealing with systems that drop frames, etc.
--/\-[ Stu : Bloody Cactus :: http://kråketær.com :: http://mega-tokyo.com ]-/\--
BloodyCactus
Oldbie
 
Posts: 624
Joined: 2016-2-03 @ 13:34
Location: Lexington VA

Re: x86 software vs hardware clocks

Postby superfury » 2017-11-27 @ 14:23

I've read that before; it's the basic principle my emulator's core loop is based around (with clipping in case it runs too slow):

Code: Select all
OPTINLINE byte coreHandler()
{
   uint_32 MHZ14passed; //14 MHZ clock passed?
   byte BIOSMenuAllowed = 1; //Are we allowed to open the BIOS menu?
   //CPU execution, needs to be before the debugger!
   lock(LOCK_INPUT);
   if (unlikely((haswindowactive&0x1C)==0xC)) {getnspassed(&CPU_timing); haswindowactive|=0x10;} //Pending to finish Soundblaster!
   currenttiming += likely(haswindowactive&2)?getnspassed(&CPU_timing):0; //Check for any time that has passed to emulate! Don't emulate when not allowed to run, keeping emulation paused!
   unlock(LOCK_INPUT);
   uint_64 currentCPUtime = (uint_64)currenttiming; //Current CPU time to update to!
   uint_64 timeoutCPUtime = currentCPUtime+TIMEOUT_TIME; //We're timed out this far in the future (1ms)!

   double instructiontime,timeexecuted=0.0f; //How much time did the instruction last?
   byte timeout = TIMEOUT_INTERVAL; //Check every 10 instructions for timeout!
   for (;last_timing<currentCPUtime;) //CPU cycle loop for as many cycles as needed to get up-to-date!
   {
      if (debugger_thread)
      {
         if (threadRunning(debugger_thread)) //Are we running the debugger?
         {
            instructiontime = currentCPUtime - last_timing; //The instruction time is the total time passed!
            updateAudio(instructiontime); //Discard the time passed!
            timeexecuted += instructiontime; //Increase CPU executed time executed this block!
            last_timing += instructiontime; //Increase the last timepoint!
            goto skipCPUtiming; //OK, but skipped!
         }
      }
      if (BIOSMenuThread)
      {
         if (threadRunning(BIOSMenuThread)) //Are we running the BIOS menu and not permanently halted? Block our execution!
         {
            if ((CPU[activeCPU].halt&2)==0) //Are we allowed to be halted entirely?
            {
               instructiontime = currentCPUtime - last_timing; //The instruction time is the total time passed!
               updateAudio(instructiontime); //Discard the time passed!
               timeexecuted += instructiontime; //Increase CPU executed time executed this block!
               last_timing += instructiontime; //Increase the last timepoint!
               goto skipCPUtiming; //OK, but skipped!
            }
            BIOSMenuAllowed = 0; //We're running the BIOS menu! Don't open it again!
         }
      }
      if ((CPU[activeCPU].halt&2)==0) //Are we running normally(not partly ran without CPU from the BIOS menu)?
      {
         BIOSMenuThread = NULL; //We don't run the BIOS menu anymore!
      }

      if (allcleared) return 0; //Abort: invalid buffer!

      interruptsaved = 0; //Reset PIC interrupt to not used!
      if (!CPU[activeCPU].registers) //We need registers at this point, but have none to use?
      {
         return 0; //Invalid registers: abort, since we're invalid!
      }

      CPU_resetTimings(); //Reset all required CPU timings required!

      CPU_tickPendingReset();

      if ((CPU[activeCPU].halt&3) && (BIU_Ready() && CPU[activeCPU].resetPending==0)) //Halted normally with no reset pending? Don't count CGA wait states!
      {
         if (romsize) //Debug HLT?
         {
            MMU_dumpmemory("bootrom.dmp"); //Dump the memory to file!
            return 0; //Stop execution!
         }

         if (FLAG_IF && PICInterrupt() && ((CPU[activeCPU].halt&2)==0)) //We have an interrupt? Clear Halt State when allowed to!
         {
            CPU[activeCPU].halt = 0; //Interrupt->Resume from HLT
            goto resumeFromHLT; //We're resuming from HLT state!
         }
         else
         {
            //Execute using actual CPU clocks!
            CPU[activeCPU].cycles = 1; //HLT takes 1 cycle for now, since it's unknown!
         }
         if (CPU[activeCPU].halt==1) //Normal halt?
         {
            //Increase the instruction counter every instruction/HLT time!
            cpudebugger = needdebugger(); //Debugging information required? Refresh in case of external activation!
            if (cpudebugger) //Debugging?
            {
               debugger_beforeCPU(); //Make sure the debugger is prepared when needed!
               debugger_setcommand("<HLT>"); //We're a HLT state, so give the HLT command!
            }
            CPU[activeCPU].executed = 1; //For making the debugger execute correctly!
            //Increase the instruction counter every cycle/HLT time!
            debugger_step(); //Step debugger if needed, even during HLT state!
         }
      }
      else //We're not halted? Execute the CPU routines!
      {
         resumeFromHLT:
         if (CPU[activeCPU].instructionfetch.CPU_isFetching && (CPU[activeCPU].instructionfetch.CPU_fetchphase==1)) //We're starting a new instruction?
         {
            if (CPU[activeCPU].registers && doEMUsinglestep && allow_debuggerstep) //Single step enabled and allowed?
            {
               if (getcpumode() == (doEMUsinglestep - 1)) //Are we the selected CPU mode?
               {
                  switch (getcpumode()) //What CPU mode are we to debug?
                  {
                  case CPU_MODE_REAL: //Real mode?
                     singlestep |= ((CPU[activeCPU].registers->CS == (((singlestepaddress >> 16)&0xFFFF)) && ((CPU[activeCPU].registers->IP == (singlestepaddress & 0xFFFF))||(singlestepaddress&0x1000000000000ULL)))||(singlestepaddress&0x2000000000000ULL)); //Single step enabled?
                     break;
                  case CPU_MODE_PROTECTED: //Protected mode?
                  case CPU_MODE_8086: //Virtual 8086 mode?
                     singlestep |= ((CPU[activeCPU].registers->CS == (((singlestepaddress >> 32)&0xFFFF)) && ((CPU[activeCPU].registers->EIP == (singlestepaddress & 0xFFFFFFFF))||(singlestepaddress&0x1000000000000ULL)))||(singlestepaddress&0x2000000000000ULL)); //Single step enabled?
                     break;
                  default: //Invalid mode?
                     break;
                  }
               }
            }

            cpudebugger = needdebugger(); //Debugging information required? Refresh in case of external activation!
            MMU_logging = debugger_logging(); //Are we logging?

            HWINT_saved = 0; //No HW interrupt by default!
            CPU_beforeexec(); //Everything before the execution!
            if ((!CPU[activeCPU].trapped) && CPU[activeCPU].registers && CPU[activeCPU].allowInterrupts && (CPU[activeCPU].permanentreset==0) && (CPU[activeCPU].internalinterruptstep==0) && BIU_Ready() && (CPU_executionphase_busy()==0)) //Only check for hardware interrupts when not trapped and allowed to execute interrupts(not permanently reset)!
            {
               if (FLAG_IF) //Interrupts available?
               {
                  if (PICInterrupt()) //We have a hardware interrupt ready?
                  {
                     HWINT_nr = nextintr(); //Get the HW interrupt nr!
                     HWINT_saved = 2; //We're executing a HW(PIC) interrupt!
                     if (!((EMULATED_CPU <= CPU_80286) && REPPending)) //Not 80386+, REP pending and segment override?
                     {
                        CPU_8086REPPending(); //Process pending REPs normally as documented!
                     }
                     else //Execute the CPU bug!
                     {
                        CPU_8086REPPending(); //Process pending REPs normally as documented!
                        CPU[activeCPU].registers->EIP = CPU_InterruptReturn; //Use the special interrupt return address to return to the last prefix instead of the start!
                     }
                     CPU_exec_lastCS = CPU_exec_CS;
                     CPU_exec_lastEIP = CPU_exec_EIP;
                     CPU_exec_CS = CPU[activeCPU].registers->CS; //Save for error handling!
                     CPU_exec_EIP = CPU[activeCPU].registers->EIP; //Save for error handling!
                     CPU_saveFaultData(); //Save fault data to go back to when exceptions occur!
                     call_hard_inthandler(HWINT_nr); //get next interrupt from the i8259, if any!
                  }
               }
            }

            #ifdef LOG_BOGUS
            uint_32 addr_start, addr_left, curaddr; //Start of the currently executing instruction in real memory! We're testing 5 instructions!
            addr_left=2*LOG_BOGUS;
            curaddr = 0;
            addr_start = CPU_MMU_start(CPU_SEGMENT_CS,CPU[activeCPU].registers->CS); //Base of the currently executing block!
            addr_start += REG_EIP; //Add the address for the address we're executing!
         
            for (;addr_left;++curaddr) //Test all addresses!
            {
               if (MMU_directrb_realaddr(addr_start+curaddr)) //Try to read the opcode! Anything found(not 0000h instruction)?
               {
                  break; //Stop searching!
               }
               --addr_left; //Tick one address checked!
            }
            if (addr_left==0) //Bogus memory detected?
            {
               dolog("bogus","Bogus execution memory detected(%u 0000h opcodes) at %04X:%08X! Previous instruction: %02X(0F:%u)@%04X:%08X",LOG_BOGUS,CPU[activeCPU].registers->CS,CPU[activeCPU].registers->EIP,CPU[activeCPU].previousopcode,CPU[activeCPU].previousopcode0F,CPU_exec_lastCS,CPU_exec_lastEIP); //Log the warning of entering bogus memory!
            }
            #endif
         }

         CPU_exec(); //Run CPU!

         //Increase the instruction counter every cycle/HLT time!
         debugger_step(); //Step debugger if needed!
         if (CPU[activeCPU].executed) //Are we executed?
         {
            CB_handleCallbacks(); //Handle callbacks after CPU/debugger usage!
         }
      }

      //Update current timing with calculated cycles we've executed!
      if (likely(useIPSclock==0)) //Use cycle-accurate clock?
      {
         instructiontime = CPU[activeCPU].cycles*CPU_speed_cycle; //Increase timing with the instruction time!
      }
      else
      {
         instructiontime = CPU[activeCPU].executed*CPU_speed_cycle; //Increase timing with the instruction time!
      }
      last_timing += instructiontime; //Increase CPU time executed!
      timeexecuted += instructiontime; //Increase CPU executed time executed this block!

      //Tick 14MHz master clock, for basic hardware using it!
      MHZ14_ticktiming += instructiontime; //Add time to the 14MHz master clock!
      if (likely(MHZ14_ticktiming<MHZ14tick)) //No 14MHz clocks to tick? This is the case with most faster CPUs!
      {
         MHZ14passed = 0; //No time has passed on the 14MHz Master clock!
      }
      else
      {
         MHZ14passed = (uint_32)(MHZ14_ticktiming/MHZ14tick); //Tick as many as possible!
         MHZ14_ticktiming -= MHZ14tick*(float)MHZ14passed; //Subtract the ticked time, keeping the remainder!
      }

      MMU_logging |= 2; //Are we logging hardware memory accesses(DMA etc)?
      if (likely((CPU[activeCPU].halt&0x10)==0)) tickPIT(instructiontime,MHZ14passed); //Tick the PIT as much as we need to keep us in sync when running!
      if (unlikely(MHZ14passed)) //14MHz to be ticked?
      {
         updateDMA(MHZ14passed); //Update the DMA timer!
         if (unlikely(useAdlib)) updateAdlib(MHZ14passed); //Tick the adlib timer if needed!
      }
      updateMouse(instructiontime); //Tick the mouse timer if needed!
      stepDROPlayer(instructiontime); //DRO player playback, if any!
      updateMIDIPlayer(instructiontime); //MIDI player playback, if any!
      updatePS2Keyboard(instructiontime); //Tick the PS/2 keyboard timer, if needed!
      updatePS2Mouse(instructiontime); //Tick the PS/2 mouse timer, if needed!
      update8042(instructiontime); //Tick the PS/2 mouse timer, if needed!
      updateCMOS(instructiontime); //Tick the CMOS, if needed!
      updateFloppy(instructiontime); //Update the floppy!
      updateMPUTimer(instructiontime); //Update the MPU timing!
      if (useGameBlaster && ((CPU[activeCPU].halt&0x10)==0)) updateGameBlaster(instructiontime,MHZ14passed); //Tick the Game Blaster timer if needed and running!
      if (useSoundBlaster && ((CPU[activeCPU].halt&0x10)==0)) updateSoundBlaster(instructiontime,MHZ14passed); //Tick the Sound Blaster timer if needed and running!
      updateATA(instructiontime); //Update the ATA timer!
      tickParallel(instructiontime); //Update the Parallel timer!
      updateUART(instructiontime); //Update the UART timer!
      if (useLPTDAC && ((CPU[activeCPU].halt&0x10)==0)) tickssourcecovox(instructiontime); //Update the Sound Source / Covox Speech Thing if needed!
      if (likely((CPU[activeCPU].halt&0x10)==0)) updateVGA(instructiontime); //Update the VGA timer when running!
      updateModem(instructiontime); //Update the modem!
      updateJoystick(instructiontime); //Update the Joystick!
      updateAudio(instructiontime); //Update the general audio processing!
      BIOSROM_updateTimers(instructiontime); //Update any ROM(Flash ROM) timers!
      MMU_logging &= ~2; //Are we logging hardware memory accesses again?
      if (--timeout==0) //Timed out?
      {
         timeout = TIMEOUT_INTERVAL; //Reset the timeout to check the next time!
         currenttiming += getnspassed(&CPU_timing); //Check for passed time!
         if (currenttiming >= timeoutCPUtime) break; //Timeout? We're not fast enough to run at full speed!
      }
   } //CPU cycle loop!

   skipCPUtiming: //Audio emulation only?
   //Slowdown to requested speed if needed!
   currenttiming += getnspassed(&CPU_timing); //Add real time!
   for (;unlikely(currenttiming < last_timing);) //Not enough time spent on instructions?
   {
      currenttiming += getnspassed(&CPU_timing); //Add to the time to wait!
      delay(0); //Update to current time every instruction according to cycles passed!
   }

   float temp;
   temp = (float)MAX(last_timing,currenttiming); //Save for subtraction (time executed in real time)!
   last_timing -= temp; //Keep the CPU timing within limits!
   currenttiming -= temp; //Keep the current timing within limits!

   timeemulated += timeexecuted; //Add timing for the CPU percentage to update!

   updateKeyboard(timeexecuted); //Tick the keyboard timer if needed!

   //Check for BIOS menu!
   if ((psp_keypressed(BUTTON_SELECT) || (Settings_request==1)) && (BIOSMenuThread==NULL) && (debugger_thread==NULL)) //Run in-emulator BIOS menu requested while running?
   {
      if ((!is_gamingmode() && !Direct_Input && BIOSMenuAllowed) || (Settings_request==1)) //Not gaming/direct input mode and allowed to open it(not already started)?
      {
         skipstep = 3; //Skip while stepping? 1=repeating, 2=EIP destination, 3=Stop asap.
         lock(LOCK_INPUT);
         Settings_request = 0; //We're handling the request!
         unlock(LOCK_INPUT);
         BIOSMenuThread = startThread(&BIOSMenuExecution,"UniPCemu_BIOSMenu",NULL); //Start the BIOS menu thread!
         delay(0); //Wait a bit for the thread to start up!
      }
   }
   return 1; //OK!
}


Essentially, there is one accumulator loop for the entire machine based on real time, plus one per hardware device based on emulated CPU time (the instruction's clock ticks times the tick duration, which is either translated to 14MHz ticks or taken directly as nanoseconds the CPU has ticked, depending on the hardware). Sound rendering itself (rendering emulated sound samples into the sound buffers) is done on the 14MHz clock, while copying from those buffers to the actual output buffers (double buffering), as well as input handling, uses the nanosecond timer instead.
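The 14MHz translation step can be sketched like this (a minimal standalone sketch, not UniPCemu's actual code; the name mhz14_ticks_passed is made up for illustration):

```c
#include <stdint.h>

/* Minimal sketch of the master-clock accumulator: CPU time (in ns) is
   added to an accumulator, whole 14.31818MHz ticks are taken out, and
   the sub-tick remainder is carried into the next call. */
#define MHZ14TICK_NS (1000000000.0 / 14318180.0) /* ~69.84ns per tick */

static double mhz14_ticktiming = 0.0; /* leftover time below one tick */

uint32_t mhz14_ticks_passed(double instructiontime_ns)
{
    mhz14_ticktiming += instructiontime_ns;
    if (mhz14_ticktiming < MHZ14TICK_NS)
        return 0; /* fast CPUs: often no tick this instruction */
    uint32_t ticks = (uint32_t)(mhz14_ticktiming / MHZ14TICK_NS);
    mhz14_ticktiming -= MHZ14TICK_NS * (double)ticks; /* keep remainder */
    return ticks;
}
```

Because the remainder is carried over instead of discarded, the tick rate stays correct in the long run no matter how the CPU time is chopped up per instruction.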

Edit: I've made a little improvement, which allows EGA and CGA to use the 14MHz clock directly, for a small speedup:
Code: Select all
OPTINLINE byte coreHandler()
{
   uint_32 MHZ14passed; //14 MHZ clock passed?
   byte BIOSMenuAllowed = 1; //Are we allowed to open the BIOS menu?
   //CPU execution, needs to be before the debugger!
   lock(LOCK_INPUT);
   if (unlikely((haswindowactive&0x1C)==0xC)) {getnspassed(&CPU_timing); haswindowactive|=0x10;} //Pending to finish Soundblaster!
   currenttiming += likely(haswindowactive&2)?getnspassed(&CPU_timing):0; //Check for any time that has passed to emulate! Don't emulate when not allowed to run, keeping emulation paused!
   unlock(LOCK_INPUT);
   uint_64 currentCPUtime = (uint_64)currenttiming; //Current CPU time to update to!
   uint_64 timeoutCPUtime = currentCPUtime+TIMEOUT_TIME; //We're timed out this far in the future (1ms)!

   double instructiontime,timeexecuted=0.0f; //How much time did the instruction last?
   byte timeout = TIMEOUT_INTERVAL; //Check every 10 instructions for timeout!
   for (;last_timing<currentCPUtime;) //CPU cycle loop for as many cycles as needed to get up-to-date!
   {
      if (debugger_thread)
      {
         if (threadRunning(debugger_thread)) //Are we running the debugger?
         {
            instructiontime = currentCPUtime - last_timing; //The instruction time is the total time passed!
            updateAudio(instructiontime); //Discard the time passed!
            timeexecuted += instructiontime; //Increase CPU executed time executed this block!
            last_timing += instructiontime; //Increase the last timepoint!
            goto skipCPUtiming; //OK, but skipped!
         }
      }
      if (BIOSMenuThread)
      {
         if (threadRunning(BIOSMenuThread)) //Are we running the BIOS menu and not permanently halted? Block our execution!
         {
            if ((CPU[activeCPU].halt&2)==0) //Are we allowed to be halted entirely?
            {
               instructiontime = currentCPUtime - last_timing; //The instruction time is the total time passed!
               updateAudio(instructiontime); //Discard the time passed!
               timeexecuted += instructiontime; //Increase CPU executed time executed this block!
               last_timing += instructiontime; //Increase the last timepoint!
               goto skipCPUtiming; //OK, but skipped!
            }
            BIOSMenuAllowed = 0; //We're running the BIOS menu! Don't open it again!
         }
      }
      if ((CPU[activeCPU].halt&2)==0) //Are we running normally(not partly ran without CPU from the BIOS menu)?
      {
         BIOSMenuThread = NULL; //We don't run the BIOS menu anymore!
      }

      if (allcleared) return 0; //Abort: invalid buffer!

      interruptsaved = 0; //Reset PIC interrupt to not used!
      if (!CPU[activeCPU].registers) //We need registers at this point, but have none to use?
      {
         return 0; //Invalid registers: abort, since we're invalid!
      }

      CPU_resetTimings(); //Reset all required CPU timings required!

      CPU_tickPendingReset();

      if ((CPU[activeCPU].halt&3) && (BIU_Ready() && CPU[activeCPU].resetPending==0)) //Halted normally with no reset pending? Don't count CGA wait states!
      {
         if (romsize) //Debug HLT?
         {
            MMU_dumpmemory("bootrom.dmp"); //Dump the memory to file!
            return 0; //Stop execution!
         }

         if (FLAG_IF && PICInterrupt() && ((CPU[activeCPU].halt&2)==0)) //We have an interrupt? Clear Halt State when allowed to!
         {
            CPU[activeCPU].halt = 0; //Interrupt->Resume from HLT
            goto resumeFromHLT; //We're resuming from HLT state!
         }
         else
         {
            //Execute using actual CPU clocks!
            CPU[activeCPU].cycles = 1; //HLT takes 1 cycle for now, since it's unknown!
         }
         if (CPU[activeCPU].halt==1) //Normal halt?
         {
            //Increase the instruction counter every instruction/HLT time!
            cpudebugger = needdebugger(); //Debugging information required? Refresh in case of external activation!
            if (cpudebugger) //Debugging?
            {
               debugger_beforeCPU(); //Make sure the debugger is prepared when needed!
               debugger_setcommand("<HLT>"); //We're a HLT state, so give the HLT command!
            }
            CPU[activeCPU].executed = 1; //For making the debugger execute correctly!
            //Increase the instruction counter every cycle/HLT time!
            debugger_step(); //Step debugger if needed, even during HLT state!
         }
      }
      else //We're not halted? Execute the CPU routines!
      {
         resumeFromHLT:
         if (CPU[activeCPU].instructionfetch.CPU_isFetching && (CPU[activeCPU].instructionfetch.CPU_fetchphase==1)) //We're starting a new instruction?
         {
            if (CPU[activeCPU].registers && doEMUsinglestep && allow_debuggerstep) //Single step enabled and allowed?
            {
               if (getcpumode() == (doEMUsinglestep - 1)) //Are we the selected CPU mode?
               {
                  switch (getcpumode()) //What CPU mode are we to debug?
                  {
                  case CPU_MODE_REAL: //Real mode?
                     singlestep |= ((CPU[activeCPU].registers->CS == (((singlestepaddress >> 16)&0xFFFF)) && ((CPU[activeCPU].registers->IP == (singlestepaddress & 0xFFFF))||(singlestepaddress&0x1000000000000ULL)))||(singlestepaddress&0x2000000000000ULL)); //Single step enabled?
                     break;
                  case CPU_MODE_PROTECTED: //Protected mode?
                  case CPU_MODE_8086: //Virtual 8086 mode?
                     singlestep |= ((CPU[activeCPU].registers->CS == (((singlestepaddress >> 32)&0xFFFF)) && ((CPU[activeCPU].registers->EIP == (singlestepaddress & 0xFFFFFFFF))||(singlestepaddress&0x1000000000000ULL)))||(singlestepaddress&0x2000000000000ULL)); //Single step enabled?
                     break;
                  default: //Invalid mode?
                     break;
                  }
               }
            }

            cpudebugger = needdebugger(); //Debugging information required? Refresh in case of external activation!
            MMU_logging = debugger_logging(); //Are we logging?

            HWINT_saved = 0; //No HW interrupt by default!
            CPU_beforeexec(); //Everything before the execution!
            if ((!CPU[activeCPU].trapped) && CPU[activeCPU].registers && CPU[activeCPU].allowInterrupts && (CPU[activeCPU].permanentreset==0) && (CPU[activeCPU].internalinterruptstep==0) && BIU_Ready() && (CPU_executionphase_busy()==0)) //Only check for hardware interrupts when not trapped and allowed to execute interrupts(not permanently reset)!
            {
               if (FLAG_IF) //Interrupts available?
               {
                  if (PICInterrupt()) //We have a hardware interrupt ready?
                  {
                     HWINT_nr = nextintr(); //Get the HW interrupt nr!
                     HWINT_saved = 2; //We're executing a HW(PIC) interrupt!
                     if (!((EMULATED_CPU <= CPU_80286) && REPPending)) //Not 80386+, REP pending and segment override?
                     {
                        CPU_8086REPPending(); //Process pending REPs normally as documented!
                     }
                     else //Execute the CPU bug!
                     {
                        CPU_8086REPPending(); //Process pending REPs normally as documented!
                        CPU[activeCPU].registers->EIP = CPU_InterruptReturn; //Use the special interrupt return address to return to the last prefix instead of the start!
                     }
                     CPU_exec_lastCS = CPU_exec_CS;
                     CPU_exec_lastEIP = CPU_exec_EIP;
                     CPU_exec_CS = CPU[activeCPU].registers->CS; //Save for error handling!
                     CPU_exec_EIP = CPU[activeCPU].registers->EIP; //Save for error handling!
                     CPU_saveFaultData(); //Save fault data to go back to when exceptions occur!
                     call_hard_inthandler(HWINT_nr); //get next interrupt from the i8259, if any!
                  }
               }
            }

            #ifdef LOG_BOGUS
            uint_32 addr_start, addr_left, curaddr; //Start of the currently executing instruction in real memory! We're testing 5 instructions!
            addr_left=2*LOG_BOGUS;
            curaddr = 0;
            addr_start = CPU_MMU_start(CPU_SEGMENT_CS,CPU[activeCPU].registers->CS); //Base of the currently executing block!
            addr_start += REG_EIP; //Add the address for the address we're executing!
         
            for (;addr_left;++curaddr) //Test all addresses!
            {
               if (MMU_directrb_realaddr(addr_start+curaddr)) //Try to read the opcode! Anything found(not 0000h instruction)?
               {
                  break; //Stop searching!
               }
               --addr_left; //Tick one address checked!
            }
            if (addr_left==0) //Bogus memory detected?
            {
            dolog("bogus","Bogus execution memory detected(%u 0000h opcodes) at %04X:%08X! Previous instruction: %02X(0F:%u)@%04X:%08X",LOG_BOGUS,CPU[activeCPU].registers->CS,CPU[activeCPU].registers->EIP,CPU[activeCPU].previousopcode,CPU[activeCPU].previousopcode0F,CPU_exec_lastCS,CPU_exec_lastEIP); //Log the warning of entering bogus memory!
            }
            #endif
         }

         CPU_exec(); //Run CPU!

         //Increase the instruction counter every cycle/HLT time!
         debugger_step(); //Step debugger if needed!
         if (CPU[activeCPU].executed) //Are we executed?
         {
            CB_handleCallbacks(); //Handle callbacks after CPU/debugger usage!
         }
      }

      //Update current timing with calculated cycles we've executed!
      if (likely(useIPSclock==0)) //Use cycle-accurate clock?
      {
         instructiontime = CPU[activeCPU].cycles*CPU_speed_cycle; //Increase timing with the instruction time!
      }
      else
      {
         instructiontime = CPU[activeCPU].executed*CPU_speed_cycle; //Increase timing with the instruction time!
      }
      last_timing += instructiontime; //Increase CPU time executed!
      timeexecuted += instructiontime; //Increase CPU executed time executed this block!

      //Tick 14MHz master clock, for basic hardware using it!
      MHZ14_ticktiming += instructiontime; //Add time to the 14MHz master clock!
      if (likely(MHZ14_ticktiming<MHZ14tick)) //No 14MHz clocks to tick? This is the case with most faster CPUs!
      {
         MHZ14passed = 0; //No time has passed on the 14MHz Master clock!
      }
      else
      {
         MHZ14passed = (uint_32)(MHZ14_ticktiming/MHZ14tick); //Tick as many as possible!
         MHZ14_ticktiming -= MHZ14tick*(float)MHZ14passed; //Keep the remainder of the time passed!
      }

      MMU_logging |= 2; //Are we logging hardware memory accesses(DMA etc)?
      double MHZ14passed_ns;
      if (unlikely(MHZ14passed)) //14MHz to be ticked?
      {
         MHZ14passed_ns = MHZ14passed*MHZ14tick; //Actual ns ticked!
         updateDMA(MHZ14passed); //Update the DMA timer!
         if (likely((CPU[activeCPU].halt&0x10)==0)) tickPIT(MHZ14passed_ns,MHZ14passed); //Tick the PIT as much as we need to keep us in sync when running!
         if (unlikely(useAdlib)) updateAdlib(MHZ14passed); //Tick the adlib timer if needed!
         updateMouse(MHZ14passed_ns); //Tick the mouse timer if needed!
         stepDROPlayer(MHZ14passed_ns); //DRO player playback, if any!
         updateMIDIPlayer(MHZ14passed_ns); //MIDI player playback, if any!
         updatePS2Keyboard(MHZ14passed_ns); //Tick the PS/2 keyboard timer, if needed!
         updatePS2Mouse(MHZ14passed_ns); //Tick the PS/2 mouse timer, if needed!
         update8042(MHZ14passed_ns); //Tick the 8042 controller timer, if needed!
         updateCMOS(MHZ14passed_ns); //Tick the CMOS, if needed!
         updateFloppy(MHZ14passed_ns); //Update the floppy!
         updateMPUTimer(MHZ14passed_ns); //Update the MPU timing!
         if (useGameBlaster && ((CPU[activeCPU].halt&0x10)==0)) updateGameBlaster(MHZ14passed_ns,MHZ14passed); //Tick the Game Blaster timer if needed and running!
         if (useSoundBlaster && ((CPU[activeCPU].halt&0x10)==0)) updateSoundBlaster(MHZ14passed_ns,MHZ14passed); //Tick the Sound Blaster timer if needed and running!
         updateATA(MHZ14passed_ns); //Update the ATA timer!
         tickParallel(MHZ14passed_ns); //Update the Parallel timer!
         updateUART(MHZ14passed_ns); //Update the UART timer!
         if (useLPTDAC && ((CPU[activeCPU].halt&0x10)==0)) tickssourcecovox(MHZ14passed_ns); //Update the Sound Source / Covox Speech Thing if needed!
         if (likely((CPU[activeCPU].halt&0x10)==0)) updateVGA(0.0,MHZ14passed); //Update the video 14MHz timer, when running!
      }
      if (likely((CPU[activeCPU].halt&0x10)==0)) updateVGA(instructiontime,0); //Update the normal video timer, when running!
      if (unlikely(MHZ14passed))
      {
         updateModem(MHZ14passed_ns); //Update the modem!
         updateJoystick(MHZ14passed_ns); //Update the Joystick!
         updateAudio(MHZ14passed_ns); //Update the general audio processing!
         BIOSROM_updateTimers(MHZ14passed_ns); //Update any ROM(Flash ROM) timers!
      }
      MMU_logging &= ~2; //Are we logging hardware memory accesses again?
      if (--timeout==0) //Timed out?
      {
         timeout = TIMEOUT_INTERVAL; //Reset the timeout to check the next time!
         currenttiming += getnspassed(&CPU_timing); //Check for passed time!
         if (currenttiming >= timeoutCPUtime) break; //Timeout? We're not fast enough to run at full speed!
      }
   } //CPU cycle loop!

   skipCPUtiming: //Audio emulation only?
   //Slowdown to requested speed if needed!
   currenttiming += getnspassed(&CPU_timing); //Add real time!
   for (;unlikely(currenttiming < last_timing);) //Not enough time spent on instructions?
   {
      currenttiming += getnspassed(&CPU_timing); //Add to the time to wait!
      delay(0); //Update to current time every instruction according to cycles passed!
   }

   float temp;
   temp = (float)MAX(last_timing,currenttiming); //Save for subtraction (time executed in real time)!
   last_timing -= temp; //Keep the CPU timing within limits!
   currenttiming -= temp; //Keep the current timing within limits!

   timeemulated += timeexecuted; //Add timing for the CPU percentage to update!

   updateKeyboard(timeexecuted); //Tick the keyboard timer if needed!

   //Check for BIOS menu!
   if ((psp_keypressed(BUTTON_SELECT) || (Settings_request==1)) && (BIOSMenuThread==NULL) && (debugger_thread==NULL)) //Run in-emulator BIOS menu requested while running?
   {
      if ((!is_gamingmode() && !Direct_Input && BIOSMenuAllowed) || (Settings_request==1)) //Not gaming/direct input mode and allowed to open it(not already started)?
      {
         skipstep = 3; //Skip while stepping? 1=repeating, 2=EIP destination, 3=Stop asap.
         lock(LOCK_INPUT);
         Settings_request = 0; //We're handling the request!
         unlock(LOCK_INPUT);
         BIOSMenuThread = startThread(&BIOSMenuExecution,"UniPCemu_BIOSMenu",NULL); //Start the BIOS menu thread!
         delay(0); //Wait a bit for the thread to start up!
      }
   }
   return 1; //OK!
}


The remaining time is fed back into the loop at various points: partly by keeping the old accumulated value and adding the new time difference (in nanoseconds), and partly within gettimepassed_ns(), which takes the delta time, returns the time passed in whole nanoseconds, and stores the remainder back for use in the next loop iteration. This allows timings to count in different units (nanoseconds, microseconds, milliseconds) while keeping the rate near-constant: one reported second of passed time really is one second, with no drift except the sub-unit remainder, which is never handed out directly.
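The remainder-keeping idea behind gettimepassed_ns() can be sketched like this (assumed shape only; the real function reads the OS high-resolution counter, and the struct and field names here are invented):

```c
#include <stdint.h>

/* Hedged sketch: convert elapsed timer ticks to whole nanoseconds,
   storing the sub-nanosecond remainder for the next call so no time
   is ever lost, regardless of the timer frequency. */
typedef struct {
    uint64_t freq;      /* timer frequency in Hz (e.g. from the OS) */
    uint64_t remainder; /* leftover, worth less than 1ns of freq-ticks */
} ns_clock;

uint64_t ticks_to_whole_ns(ns_clock *c, uint64_t ticks_passed)
{
    /* old remainder plus the new ticks, scaled up to nanoseconds
       (note: ticks_passed*1e9 can overflow uint64_t for huge deltas;
       a real implementation would split the multiply) */
    uint64_t scaled = c->remainder + ticks_passed * 1000000000ULL;
    c->remainder = scaled % c->freq; /* keep sub-ns part for next time */
    return scaled / c->freq;         /* whole nanoseconds passed */
}
```

Summed over many calls, the whole-nanosecond results converge on the exact elapsed time, which is what keeps the different-unit accumulators from drifting apart.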

For more information on that, see the high resolution clock support:
https://bitbucket.org/superfury/unipcem ... ?at=master

The main problem that's left is that the core code (the graphics card emulation and CPU emulation in particular) is so heavy that emulating takes far more time than actually passes: that's why it reports 18-20% speed at most on an i7 CPU, spending about five times as much time emulating the system as real time passes. The emulated system in this case is a 16MHz 80386 with VGA hardware plus all supported audio hardware (which takes up essentially no time, according to the profiler).
The breakdown is about 30% CPU emulation (CPU/BIU to be exact), 18% VGA, and the remainder is other hardware and rendering to the screen surface (including text surfaces for buttons etc.). Other than those, it's 5.2% updateAudio, 4% updateSoundBlaster and 27.8% unaccounted for.
superfury
l33t
 
Posts: 2267
Joined: 2014-3-08 @ 11:25
Location: Netherlands

Re: x86 software vs hardware clocks

Postby elianda » 2017-12-08 @ 05:44

About the ET3000: a simple look at the card's oscillators shows all base frequencies:
http://retronn.de/imports/hwgal/hw_tseng_et3000.html
and this one uses 40 MHz instead of 36 MHz: http://retronn.de/imports/hwgal/hw_grap ... front.html
Retronn.de - Vintage Hardware Gallery, Drivers, Guides, HQ Videos.
Youtube Channel
FTP Server - Driver Archive and more
DVI2PCIe alignment and 2D image quality measurement tool
elianda
l33t
 
Posts: 2215
Joined: 2006-4-21 @ 16:56
Location: Halle / Germany
