VOGONS


First post, by purple_toupee

Rank: Newbie

This is a weird one. I'm writing a program (in Watcom C with DOS/4GW) that uses VGA mode 0x13. It doesn't do anything fancy, but it only works on SOME of my real hardware.

What it does, in my simplified repro case:

  1. Reprogram PIT timer channel 0 to use for timing.
  2. Switch to VGA mode 0x13 using int 0x10, function 0 -- no other VGA setup is done.
  3. Wait for vsync start, then vsync end (read bytes from port 0x3DA until bit 3 turns on, then off).
  4. Disable interrupts.
  5. Record clock time.
  6. Again wait for vsync start and end.
  7. Record clock time elapsed.
  8. Enable interrupts.
  9. printf something (so, BIOS text output).
  10. Wait for a key using a tight kbhit/getch loop.

(Basically, I was measuring frame time to test the code & assumptions.)

This works fine on DOSBox. It worked fine on my 486 back when I had an 8-bit ATI card in it. Last week, I finally upgraded to a VLB Genoa card. As soon as I change modes, the monitor loses sync. (I am using a VGA->HDMI adapter.) I've tested the new video card with a variety of commercial games; I also used debug to switch modes to 0x13 and everything was fine in DOS.

Here's the wacky thing. If I don't do the whole measurement block -- so I never look at 3da -- then things work fine. What???

I don't THINK it's a hardware issue. I'm betting that EITHER I am messing up the vsync logic, OR I am just failing to understand something about programming the VGA. I'm hoping someone here has guru-level knowledge to point out my error -- or at least suggest a direction of investigation. I'm kind of stumped.

Here's some of the code, in case it's obvious what I did wrong.

extern void DelayInstruction();
#pragma aux DelayInstruction = \
"jmp _label" \
"_label:"

// necessary because of Watcom's stupid 32-bit-wide API
uint8 InPortB(uint16 portNum) { return (uint8)inp(portNum); }

bool IsVerticalRetraceActive()
{
const uint8 vgaStatus = InPortB(0x3DA);
return (vgaStatus & uint8(1 << 3)) != 0;
}

void WaitForVerticalRetraceStart()
{
while (!IsVerticalRetraceActive())
{
DelayInstruction();
}
}

void WaitForVerticalRetraceStop()
{
while (IsVerticalRetraceActive())
{
DelayInstruction();
}
}

void VerticalSync()
{
WaitForVerticalRetraceStart();
WaitForVerticalRetraceStop();
}

//
// main:
//

{
VerticalSync();
_disable();
const uint32 ticks = CHWClock::GetTicks();
VerticalSync();
const uint32 after = CHWClock::GetTicks();
_enable();

const uint32 delta = after - ticks;
// (log the result)
}

printf("ready. hit a key.\n");
while (!kbhit()) {}
while (kbhit()) { getch(); }

Reply 1 of 16, by Tiido

Rank: l33t

I would have thought the VGA to HDMI adapter is at fault here; many old video cards do not output timings that the digitization process of such adapters (and LCD monitors etc.) expects, leading to issues like this. But since it works with games etc., and when you omit some stuff, it has to be something else...

Does the sync come back when the program exits or changes mode back to text? Does timer freq play a role? I.e. if you leave it unprogrammed, does anything change (I am thinking maybe the video BIOS relies on the timer being at its default freq, however unlikely that may be)?

T-04YBSC, a new YMF71x based sound card & Official VOGONS thread about it
Newly made 4MB 60ns 30pin SIMMs ~
mida sa loed ? nagunii aru ei saa 😜

Reply 2 of 16, by purple_toupee

Rank: Newbie

Ah, these are good thoughts. Sync does return when I restore the previous mode.

I think I'll try two experiments later, and I'll report back -- I will test without messing with the PIT, and I will also see what happens if I don't restore the mode; does it start working when DOS is driving it?

Thanks!

Reply 3 of 16, by darry

Rank: l33t++

A logic analyzer or oscilloscope might help debug this from a VGA output point of view.

Probably not worth investing in one if this is the only use case.

Reply 4 of 16, by Jo22

Rank: l33t++

Mode 13h without modifications is MCGA, rather than VGA (MCGA also offers mode 11h).
All the intelligence of VGA is missing in that mode; it's more like a frame buffer.

Anyway, I don't mean to be nitpicking here. What I mean to express is something else:
For troubleshooting, it's possible to try getting the code to run on plain MCGA machines first (IBM PS/2 models with 8086).

If it works there, it should work on any VGA card, as well.
PCem/86Box can provide PS/2 machine settings, for example.
Unfortunately, there's no support for 386 code.

On real hardware, an accelerator board could be installed to replace 8088/8086 by a real processor.
Those boards with 80286 processors were popular, but 80386 versions not so much.
Here's an example, though.: https://www.computinghistory.org.uk/det/34848 … ce-Accelerator/

Anyway, it just came to mind. It's maybe not the most practical solution here, I admit.
Unless a PS/2 Model 30 (8086) is somewhere in the house already.

Edit: Another idea. Maybe the VGA BIOS of the VLB card differs from the PC bus counterpart?
An 8-Bit card seems to be designed to run in 16-Bit PCs and ATs, while VL bus was 386+ exclusive.
So a BIOS developer could safely drop 16-Bit instruction compatibility and use 386/486 real-mode instructions for performance.

"Time, it seems, doesn't flow. For some it's fast, for some it's slow.
In what to one race is no time at all, another race can rise and fall..." - The Minstrel

//My video channel//

Reply 5 of 16, by purple_toupee

Rank: Newbie

Man, this is the weirdest thing. I have a truly minimal repro. I'll post the whole main.cpp below.

In short -- I can change to the mode and everything is fine, but the *second* I read port 3DA -- a *single* time, one byte -- the monitor just shuts off.

So, to be specific: in this source code, I see "ready. hit a key." in that unmistakable, fat, mode 13 BIOS font. I hit a key, and the screen goes black and my monitor displays an "OUT OF RANGE" message. When I hit a key again, the program switches to mode 3 and the display starts working again.

If I run it with an argument -- to suppress the switch to mode 3 -- then sync never returns until I switch modes somehow.

This is so weird. It has to be either something super idiosyncratic (and maybe broken) with the Genoa card, or a shortcoming of the adapter, right? I'll test against my old DB15 LCD tomorrow.

Thanks for the thoughts everyone. Sadly, I do not have a scope, and I have no other era VGA cards convenient to test against.

@jo22: I'm sure the VLB card differs in many ways, including the bios -- but like this? seems so weird. Like...reading 3DA should be pretty common for any 8, 16, or 32-bit VGA compatible adapter to support, right?

(edited to remove unused code)

#include <conio.h>
#include <dos.h>
#include <i86.h>
#include <stdio.h>
#include <string.h>

bool IsVerticalRetraceActive()
{
const int vgaStatus = inp(0x3DA);
return (vgaStatus & (unsigned char)(1 << 3)) ? true : false;
}

void SetVideoMode(unsigned char modeNum)
{
REGS regs;
memset(&regs, 0, sizeof(regs));
regs.x.eax = modeNum; // AH=00: Set mode to AL.
int386(0x10, &regs, &regs);
}

int main(int argc, char* argv[])
{
(void)argv;

SetVideoMode(0x13);

printf("ready. hit a key.\n");
while (!kbhit()) {}
while (kbhit()) { getch(); }

IsVerticalRetraceActive();


printf("ready again. hit a key.\n");
while (!kbhit()) {}
while (kbhit()) { getch(); }

if (argc <= 1)
{
SetVideoMode(3);
}

return 0;
}

Reply 6 of 16, by Jo22

Rank: l33t++
purple_toupee wrote on 2023-09-14, 06:07:

@jo22: I'm sure the VLB card differs in many ways, including the bios -- but like this? seems so weird. Like...reading 3DA should be pretty common for any 8, 16, or 32-bit VGA compatible adapter to support, right?

Your guess is as good as mine, I'm not entirely sure, either. 🤷
Hm. Maybe it's somehow related to Protected-Mode <> Real-Mode switching and the Watcom compiler?
The basic VGA BIOS is using a Real-Mode interface, so DOS4GW or any other Extender has to assist in translation.
Maybe there's something deep inside having a little problem with V86 in this case, maybe something related to interrupt handling, a timing issue etc.
I'm sorry that I can't be more precise here, that kind of VGA programming isn't one of my specialties, I'm afraid. It's more of a wild guess. 🙁

Edit: Hm, maybe DOS32A can be used in place of DOS4GW, for testing purposes ?
Just for troubleshooting, I doubt it has an effect. It's just an idea.

"Time, it seems, doesn't flow. For some it's fast, for some it's slow.
In what to one race is no time at all, another race can rise and fall..." - The Minstrel

//My video channel//

Reply 7 of 16, by vstrakh

Rank: Member

On multiple occasions -- with different cards and motherboards -- I've seen a "signal out of range" warning from the monitor when switching between text and graphics modes. And it was always fixed by re-seating the video card or changing the slot it was installed in.
This issue could be related. Bad connectivity in ISA/VLB slots leads to weird I/O requests as seen by the card.

Reply 8 of 16, by clb

Rank: Member

Reading 3DAh is perfectly safe. I am currently programming different DOS graphics programs that also read 3DAh, and I've never observed any issues with that. You can go to https://github.com/juj/crt_terminator/tree/main/DOS/bin to quickly pull down some of them; e.g. PALANIM or SCROLL might be simple ones to test. Those are compiled with Borland Turbo C++ 3.0. Source code is provided alongside.

One thing I have noticed, though, when working with VGA to HDMI converters: one cheap converter would lose signal depending on what content was on the screen. Typically this happened with a single-color screen output; the converter misses seeing any active image content and thinks there is no signal.

So after the first kbhit, you are not only reading 3DAh, you are also printing "ready again. hit a key.". Triple-check that it is not the act of printing that on the screen that causes the issue.

If that is not it, then I would look towards the compiler. I understand that Open Watcom should be able to build real-mode programs as well, so maybe try compiling that code in real mode to see if that affects anything; or, if PALANIM and SCROLL above work, try pulling down a copy of Borland Turbo C++ 3 and see how it compiles your test code, to see if it behaves differently.

To my eye your code should be perfectly ok.

Reply 9 of 16, by Jo22

Rank: l33t++
clb wrote on 2023-09-14, 08:48:

One thing that I do have noticed though when working with VGA to HDMI converters is one cheap converter that was losing signal depending on what the viewed content is on the screen. Typically this was due to having a single color screen output, the converter misses seeing any active image content there and thinks there is no signal.

Could it be that simple -- the converter acts like an early/mid-90s VGA monitor with an energy-saving feature?
It goes into standby mode if the VGA card stops sending sync or if a "blank" image is received, as with that Windows 3.1 screensaver?

"Time, it seems, doesn't flow. For some it's fast, for some it's slow.
In what to one race is no time at all, another race can rise and fall..." - The Minstrel

//My video channel//

Reply 10 of 16, by clb

Rank: Member

I am skeptical that it would be an intentional screen saver, but given the odd circumstances and quite clear-cut looking code, that is good to rule out at least, if nothing else.

Reply 11 of 16, by purple_toupee

Rank: Newbie

This has been a ride. No, it's not working yet, but I have some data points, and a suspicion.

I finally used my period LCD DB15 monitor -- no change. So, the HDMI adapter is ruled out.

I tried adding delays in the repro code -- mostly to confirm whether reading 3DA (or printf) was triggering the blank. I confirmed that it was the read: I can print and delay and observe the text; the moment the code reads 3DA, the display blanks.

I played around a lot with the repro code, writing various bits of text and garbage to the framebuffer. Lo! It does make a difference. So far, it seems like the problem does NOT happen if, before I read 3DA, I do either of these:
* write at least the first few rows of pixels (from the top), OR
* read pixels -- any number of pixels (including one), from anywhere in the framebuffer. Just a single byte read; no writing is necessary. WTAF? How weird is this?

I was starting to feel better -- at least I have a workaround, right? -- when I discovered that at least one commercial game doesn't work right either. When I run Covert Action and start it up in VGA mode, it eventually blanks. (The first row of pixels is a little dark, and is also not solid; data point?)

Next, I tried the hardware angle, because to me this increasingly looks like something not working 100% on an electrical level. I unseated the card, applied compressed air, used an eraser on the contacts -- no luck. I tried a 2nd and then a 3rd VLB slot. No luck. Though somewhere in here, I noticed that the Covert Action intro SOMETIMES gets further than other times. It always blanks, but it stopped happening at a consistent place.

Finally, I read the manual for my motherboard -- something I should have done long ago, I know. I found that there are 3 VLB jumpers: 0/1 wait state, CPU <= 33MHz or CPU > 33MHz, and...oh my... "Special VL-Bus Setting: Short JP23 as below and then add a 150P capacitor on signal /BS16 when using the Genoa or other special V-L Bus VGA Card." WHAT?!

OK, first of all, this is wild, right? I love it. I had forgotten just how nutty hardware setup was back in the 80s and early 90s. I do wonder how anyone would know whether their particular VLB card was "special." Is this terminology that actually means something, or were the MSI tech writers huffing paint?

Anyway, mocking aside, you may recall that the card I am trying to make work is, in fact, a Genoa card. I assume /BS16 refers to a signal on the VL bus. Perhaps this one?

LBS16: Local Bus Size 16. Used by slave device to indicate that it has a transfer width of only 16 bits.

source: https://allpinouts.org/pinouts/connectors/bus … vesa-local-bus/

I don't think I'm going to go solder a capacitor on it, though. I tried shorting that jumper to see if it made a difference, but it didn't.

SO. I have three theories. Curious what you all think too.

  1. There is an incompatibility between this specific card (Genoa 8500VL / Cirrus Logic CL-GD5426) and this specific motherboard (MSI MS-4132). The wacky JP23 description certainly hints at this possibility.
  2. This graphics card is broken. Maybe it was this way 30 years ago, or maybe something failed over time, but it's not working right now. (I bought this card a few weeks ago.)
  3. This motherboard has a flaw in its VLB circuits, perhaps something failing over time. (I have never used VLB on this motherboard before this.)

Fun!

Oh -- @clb, thanks for the link, I tried out your interesting utilities. All worked as expected, EXCEPT hscroll, which briefly displayed a green bar and then blanked. Whatever the problem is, it certainly does seem sensitive to the amount of data displayed in the first row.

Thanks everyone for the interesting discussion.

Reply 12 of 16, by Jo22

Rank: l33t++

Maybe VLB/speed related, not sure. At some point, the VL bus had electrical issues with noise, crosstalk and capacitive loading.
At higher frequencies, the long traces and PCB layout began to show their effect. So yeah, it might be related to the physical side here.
That's why 486DX50 systems could handle merely a single VLB card (no matter if it was a single or combined device).

Edit: /BS16 might be "bus size 16". See Re: Cirrus Logic GD5429 VLB significantly slower than a TSENG ET4000
Edit: I wonder a) what JP23 exactly does and b) in which way the 150 picofarad capacitor should be installed exactly. As a filter cap? As a "delay line"? 🤷
I really wish manual writers wouldn't always write such one-liners. A bit of explanation would be great.

"Time, it seems, doesn't flow. For some it's fast, for some it's slow.
In what to one race is no time at all, another race can rise and fall..." - The Minstrel

//My video channel//

Reply 13 of 16, by clb

Rank: Member

Well that is quite peculiar all in all. Some detective journey you are on.

A different VLB motherboard + VLB graphics card combo might be helpful to try out, so you could cross-reference the motherboards with different VLB cards and rule out whether it is that particular motherboard, that particular card, or their intercompatibility. Might be simpler than starting to solder caps onto the board -- although probably more costly as well 😒

Reply 15 of 16, by mkarcher

Rank: l33t
purple_toupee wrote on 2023-09-15, 07:27:

"Special VL-Bus Setting: Short JP23 as below and then add a 150P capacitor on signal /BS16 when using the Genoa or other special V-L Bus VGA Card." WHAT?!

OK, first of all, this is wild, right? I love it. I had forgotten just how nutty hardware setup was back in the 80s and early 90s. I do wonder how anyone would know whether their particular VLB card was "special." Is this terminology that actually means something, or were the MSI tech writers huffing paint?

This is technobabble and most likely means: "When testing some VGA cards with our mainboard, we found a Genoa graphics card that is incompatible with our mainboard. Our hardware developers figured a hack that makes this card work. If you have some issues, you might try this hack, too."

As the hack involves the /BS16 signal, and Cirrus chips rely on the /BS16 signal to work, any incompatibility of that mainboard with BS16-using cards might affect Cirrus cards, too. Adding a capacitor to the /LBS16 signal on the VL bus might be what they mean, but they might also mean the /BS16 pin of the 486 processor. Depending on the board, /LBS16 on the VL bus and /BS16 on the 486 may be directly connected, or have some termination/damping resistor between them. If there is a resistor, it might be relevant whether the capacitor is installed on the VL side or the host bus side, creating an R/C lowpass for signals originating from the other side. I would expect the 486-to-ISA bridge to operate /BS16 as well, so adding a capacitor on that line might also interfere with ISA bus operation.

You are completely correct: This snippet of information is too short to make complete sense of it.

Reply 16 of 16, by purple_toupee

Rank: Newbie

This has been a fun thread! I have one final update for you all.

Over the weekend, I used this issue as an excuse to locate and procure a multisync CRT. To no great surprise, it showed the same exact behavior.

But...the recycling center I went to also had a fantastic selection of motherboards, and I ended up picking up a PCI-based Pentium 75 motherboard instead. Sadly, this means we won't know for now whether the issue is with the card or the motherboard, but I'm betting the latter. My experience with 386 and 486 motherboards has been that they are remarkably eccentric and often sensitive to small changes; they like one SuperIO card but not another, one SIMM but not another, and so on. I think mkarcher has it -- there's just a weird incompatibility with this motherboard, and the engineers discussed it with the manual tech writer -- with comically bad results.

On the plus side, I have a beautiful CRT and a new project, so all's not lost. Thanks for the fun discussion all.