First post, by superfury
I notice that once Windows 95 setup starts detecting the hardware and the hardware is an ET4000 (in UniPCemu), the 4-bit color depth of the screen is somehow changed to an 8-bit color depth? Everything is still set up for a 4-bit color depth (seeing as the screen's colors become all weird), but the display width doesn't change (evidenced by the virtual screen width being 0x50 in the row width calculations, thus 80 bytes (*4) = 320 bytes per row).
I do notice that the attribute controller is somehow in 8-bit color mode (thus latching every other 4-bit pixel and combining two of them into an 8-bit pixel)? The 8-bit Color Enable setting is still 0 (thus 4-bit input), but the high nibble of Attribute Controller register 0x16 is 0x2 (bit 5 set, bit 4 cleared), which forces the controller to handle attributes as if 8-bit Color Enable were set? Is that correct behaviour?
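The pixel-combining step described above can be sketched as follows. This is a hypothetical helper, not UniPCemu's actual code, and which nibble of the pair lands in the high half of the 8-bit pixel is an assumption here:

```c
#include <stdint.h>

/* Hypothetical sketch: in 8-bit color mode the attribute controller
   latches one 4-bit pixel and combines it with the next one, producing
   one 8-bit pixel per pair of 4-bit inputs.
   Nibble order (first input = high nibble) is an assumption. */
uint8_t combine_4bit_pixels(uint8_t first, uint8_t second)
{
    return (uint8_t)(((first & 0xF) << 4) | (second & 0xF));
}
```

With this ordering, the pair (0xA, 0x5) would yield the 8-bit pixel 0xA5.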
Edit: The register 16h's value is 0x20 now. Perhaps bits 4-5 are forced to 0 (compatibility mode) when bit 7 (Bypass Palette) isn't set? Seeing as the high-color mode makes no sense when the palette isn't bypassed?
Edit: Fixing said graphics card bug makes it properly handle said mode. In UniPCemu's case, 0x20 == 0x00 for said register (to be exact, cases 00b and 10b are identical (VGA-compatible mode)).
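A minimal sketch of that decode, assuming the behaviour described above (bit 7 = Bypass Palette, bits 5-4 = mode select, with cases 00b and 10b both meaning VGA-compatible); the function and enum names are hypothetical, not from UniPCemu or the ET4000 data sheet:

```c
#include <stdint.h>

typedef enum { MODE_VGA_COMPATIBLE, MODE_HICOLOR } atc16_mode_t;

/* Hypothetical decode of ET4000 Attribute Controller register 16h.
   When bit 7 (Bypass Palette) is clear, high-color makes no sense,
   so bits 5-4 are treated as 0 (assumption from the observations above). */
atc16_mode_t decode_atc16(uint8_t reg16)
{
    uint8_t modebits = (reg16 >> 4) & 3; /* <bit5><bit4>b */
    if (!(reg16 & 0x80))
        modebits = 0; /* palette not bypassed: force compatibility mode */
    /* cases 00b and 10b are identical: VGA-compatible */
    return (modebits == 0 || modebits == 2) ? MODE_VGA_COMPATIBLE
                                            : MODE_HICOLOR;
}
```

Under this reading, 0x20 and 0x00 decode to the same (VGA-compatible) mode, matching what Windows 95 setup expects.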
Edit: Whoops. Just looked at the documentation for the ET4000 again. For some odd reason, it lists the bits in reversed order in the description of bits 4 and 5. Instead of listing them in plain binary order (00b, 01b, 10b, 11b; as <bit5><bit4>b), it lists the bits reversed (00b, 10b, 01b, 11b; as <bit4><bit5>b)? That's confusing, seeing as all other documentation lists such information in the usual order (first the high bit, then the low bit).