VOGONS


CRT Terminator Digital VGA Feature Card ISA DV1000


Reply 200 of 236, by clb

Nelson68k wrote on 2025-02-10, 08:55:
clb wrote on 2025-02-09, 23:42:

I got the impression that there may still be some systems where CRT Terminator was not quite working out for you?

Correct. I will be happy to do more tests at the weekend. I haven't updated the firmware yet.

Great, fingers crossed that the root cause will turn out to have been the same thing as in CircuitRewind's case.

Reply 201 of 236, by keenmaster486


@clb, is there a way to overclock the CRTT to get it to output 1440 line modes?

World's foremost 486 enjoyer.

Reply 202 of 236, by clb


Unfortunately 2560x1440 is beyond the reach of CRT Terminator. 2560x1440 @ 70 Hz would require a pixel clock of about 270.9 MHz. The maximum rated pixel clock of the FPGA we use is 118.8 MHz, which we overclock to about 150 MHz. The gap is too large to bridge even with overclocking. 🙁
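
For reference, the pixel clock of a mode is simply htotal * vtotal * refresh rate. A quick sanity check of that 270.9 MHz figure in C (the vertical total below is an approximation consistent with CVT-RBv2 reduced blanking, not an official timing value):

    #include <stdio.h>

    /* Rough CVT-RBv2 pixel clock estimate: clock = htotal * vtotal * refresh.
       CVT-RBv2 uses a fixed 80-pixel horizontal blank; vtotal is approximate. */
    int main(void) {
        double htotal = 2560 + 80;    /* active width + reduced blanking */
        double vtotal = 1466;         /* approx. total lines for 1440 active @ 70 Hz */
        double refresh = 70.0;
        printf("%.1f MHz\n", htotal * vtotal * refresh / 1e6);  /* ~270.9 MHz */
        return 0;
    }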

Reply 203 of 236, by keenmaster486


Hmm. Think you could get away with it at 1920x1440 60Hz?

Maybe if you cooled the FPGA?

World's foremost 486 enjoyer.

Reply 204 of 236, by jmarsh


If the output signal mode is HDMI rather than DVI (and I can understand why it wouldn't be - HDMI is more complex for little benefit if there's no audio), you could signal the output pixels as double-width. So 960x1440 pixels (which @ 70 Hz would use a pixel clock of ~96 MHz) would end up being rendered as 1920x1440. I'm not sure there's really any benefit to this though, rather than outputting a smaller image and letting the monitor upscale...

Reply 205 of 236, by keenmaster486


Well, my motivation is to get an output resolution higher than 1080 lines that a typical capture card can still recognize. 1600x1200 is not a resolution most modern capture cards expect or support; they are aimed at gamers and built around 1080, 1440, etc.

World's foremost 486 enjoyer.

Reply 206 of 236, by clb

keenmaster486 wrote on 2025-02-23, 07:51:

Well, my motivation is to get an output resolution higher than 1080 lines that a typical capture card can still recognize. 1600x1200 is not a resolution most modern capture cards expect or support; they are aimed at gamers and built around 1080, 1440, etc.

For picky HDMI capture cards, it is recommended to
- set output mode to 1920x1080 (DIP1.1-1.4 to 1011b) or 640x480 (DIP1.1-1.4 to 1000b)
- disable Multimode output (DIP2.1 to OFF)
- disable StutterStop (DIP2.2 and DIP2.3 to OFF)

Would that work in your use case?

This will result in CRT Terminator outputting the DMT or CVT-RBv2 standards-compliant Full HD or 640x480 video modes (https://oummg.com/manual/#appendix_d_supporte … xed_resolutions), which should be the most compatible video modes possible.

If an HDMI capture card doesn't work with those settings, then I believe the issue is that the capture card is not compatible with a DVI-D video stream being sent over HDMI. For example, the Elgato capture devices fall into this category and are best avoided.

Reply 207 of 236, by keenmaster486

clb wrote on 2025-02-24, 11:17:
keenmaster486 wrote on 2025-02-23, 07:51:

Well, my motivation is to get an output resolution higher than 1080 lines that a typical capture card can still recognize. 1600x1200 is not a resolution most modern capture cards expect or support; they are aimed at gamers and built around 1080, 1440, etc.

For picky HDMI capture cards, it is recommended to
- set output mode to 1920x1080 (DIP1.1-1.4 to 1011b) or 640x480 (DIP1.1-1.4 to 1000b)
- disable Multimode output (DIP2.1 to OFF)
- disable StutterStop (DIP2.2 and DIP2.3 to OFF)

Would that work in your use case?

This will result in CRT Terminator outputting the DMT or CVT-RBv2 standards-compliant Full HD or 640x480 video modes (https://oummg.com/manual/#appendix_d_supporte … xed_resolutions), which should be the most compatible video modes possible.

If an HDMI capture card doesn't work with those settings, then I believe the issue is that the capture card is not compatible with a DVI-D video stream being sent over HDMI. For example, the Elgato capture devices fall into this category and are best avoided.

My capture card works fine with 1920x1080 from the CRTT. 1600x1200 is what it doesn't like - it scans it as 800x600.

But I wasn't really talking about my capture card in particular, since I can probably find one somewhere that likes 1600x1200, but more about the theoretical typical capture card that works with 1920x1080 (and 2560x1440 if it's a 4K card) but won't accept 1600x1200. So if someone wants a higher resolution than 1920x1080, they're out of luck with 1600x1200, but 1440 might work.

But that's not particularly important. I'll try to find a capture card that accepts 1600x1200.

Right now I want to present some tests I've been doing with respect to the scaling and the behavior when switching video modes.

My apologies if any of this is fixed in the latest firmware version - I couldn't find anything about these issues in the changelogs, and I haven't delved into updating the firmware yet... I'll be doing so on Linux and I'm a little worried about doing it wrong and bricking the card.

First, the mode switching: I have some videos if you want to see them, but I'm still experiencing the "blip" when switching DOS video modes, despite the output mode remaining the same.

Example:
- I have the output mode set to 1600x1200x60Hz. Multimode off, Stutterstop off, vsync off (fixed 60Hz). This mode does not change throughout the test, and I can verify it by checking the info on my monitor. I'm using a monitor only, no capture card.
- I'm in DOS text mode
- I change the DOS video mode to any graphics mode
- The monitor blanks for 1-2 seconds as though it has detected a mode change
- But there has not been a mode change. The mode is still 1600x1200x60Hz.
- I change the DOS video mode back to text mode
- There is no "blip" this time. The modeswitch happens instantaneously and my monitor does not see an interruption in the 1600x1200x60Hz signal.

So I'm seeing a blip going from text->graphics, but not from graphics->text.

Note that this happens with any fixed output resolution. I'm just using 1600x1200 as an example. It also happens at 1920x1080 (or any other resolution) on my capture card.

Getting rid of the blip is important to me, since that is what gives the "CRT-like" experience I'm looking for: as a programmer I frequently switch between text and graphics modes and want to see the output right away, rather than missing those often crucial first 1-2 seconds.

Enabling Multimode makes the blip problem worse, since there is an output mode change corresponding to every DOS mode change. But that's the nature of Multimode, of course.

Next, the scaling.

First off, enabling Multimode makes the scaling "perfect". But Multimode has two main problems:

1. My LCD monitor does not let me force the aspect ratio to 4:3. It only lets me set it to "original", which means it preserves the original aspect ratio of the mode rather than stretching it to fill the screen. This means that many of the Multimode modes get scanned at their native non-4:3 aspect ratios. For example text modes come in as 1440x1200 in Multimode, and consequently appear stretched vertically on my screen, despite having perfect scaling.
2. Multimode makes the "blip" when changing video modes unavoidable.

If I fix the resolution to 1600x1200 (or 1080 line modes), the scaling for text modes and 320x200 is still in the "good" realm, which is worlds better than any other scaling solution on the market.

However, the scaling breaks down at 640x200, 640x350, and 640x480 (and, I assume, higher resolutions, but I didn't try those).

The scaling is best in 320x200. Text mode is alright - some wonkiness though. Again, Multimode "fixes" this, but the aspect ratio is no longer 4:3, so it stretches vertically on my monitor (not to mention the modeswitch issue).

Apologies if I'm touching on something you've already tried to accomplish, but can't for some reason due to hardware limitations.

I can't capture 1600x1200 with my capture card, so I'm using my phone to show pixels close up on my monitor. The CRTT output mode is 1600x1200, Multimode off.

I'm using a program I wrote called GRIDTEST - attached at the bottom of this post - that displays a grid of alternating pixels in your choice of the common DOS video modes, perfect for this kind of test.

As my "reference" example of scaling these DOS resolutions to 1600x1200, I'm including screenshots of DOSBox running with a fixed output resolution of 1600x1200 in windowed mode, set to use OpenGL rendering.

The DOSBox screenshots show what "should" be possible when scaling to a fixed resolution instead of a dynamically changing one, assuming there isn't some limitation with CRTT I don't know about.

320x200 comparison between CRTT and DOSBox (looks almost identical - here the CRTT is actually doing a little better than DOSBox as it fudges the pixels slightly to make them more uniform without sacrificing almost any sharpness):

The attachment crtt_320x200_1600x1200.jpg is no longer available
The attachment dosbox_320x200_1600x1200.png is no longer available

640x200 comparison between CRTT and DOSBox (here the difference becomes stark, DOSBox is the one fudging the pixels for uniformity now, making them as sharp as possible without making them uneven):

The attachment crtt_640x200_1600x1200.jpg is no longer available
The attachment dosbox_640x200_1600x1200.png is no longer available

I have to add more posts to the thread to show the rest of them since it only lets me attach five images per post.

GRIDTEST program:

The attachment GRIDTEST.EXE is no longer available

World's foremost 486 enjoyer.

Reply 208 of 236, by keenmaster486


640x350 comparison between CRTT and DOSBox:

The attachment crtt_640x350_1600x1200.jpg is no longer available
The attachment dosbox_640x350_1600x1200.png is no longer available

640x480 comparison between CRTT and DOSBox:

The attachment crtt_640x480_1600x1200.jpg is no longer available
The attachment dosbox_640x480_1600x1200.png is no longer available

World's foremost 486 enjoyer.

Reply 209 of 236, by keenmaster486


And as a bonus here are the same tests done with the overscan border turned on, which makes the situation a little worse.

And I have one more question - is there a way to make the overscan border persistent? I can turn it on with the CRTT tool, but it goes away the next time I power cycle the machine.

The attachment crtt_320x200_1600x1200_bordered.jpg is no longer available
The attachment crtt_640x200_1600x1200_bordered.jpg is no longer available
The attachment crtt_640x350_1600x1200_bordered.jpg is no longer available
The attachment crtt_640x480_1600x1200_bordered.jpg is no longer available

World's foremost 486 enjoyer.

Reply 210 of 236, by keenmaster486


Another update: The CLGD tool works on my CL-GD5428 card. So I have hardware palette updates without needing PALTSR. Certain demos actually work now.

So it worked on 5428, but not on 5426. I wonder if it's the chip or something to do with the design of the card that carries the chip.

World's foremost 486 enjoyer.

Reply 211 of 236, by keenmaster486


I have just thought of something else.

Probably about half of widescreen LCD monitors have the ability to display the image in its original aspect ratio, or perhaps force it to 4:3.

It might be useful for the CRTT to have an optional mode that outputs a 4:3 image centered in a wider output resolution, handling the aspect ratio correction itself. That would matter especially moving into the future, when it is not guaranteed that the average monitor 10 years from now will be able to display a 4:3 aspect ratio properly.

World's foremost 486 enjoyer.

Reply 212 of 236, by keenmaster486


More testing.

These screenshots were captured with my EVGA XR-1 Lite, scanning 1440x1080 output from the CRTT as 1920x1080, upscaled to 1920x1440 by OBS.

I chose 1440x1080 so that I can get the correct aspect ratio on both my monitor and the capture card at the same time.

These are better results than I have gotten from any other VGA capture methods. Previously the best I could do was the OSSC, and its output was rather fuzzy no matter what I did.

The scaling, however, I think could still use some fudging of pixels to make them look more uniform. It suffers somewhat from unevenness. It's not very noticeable except in games like this one that frequently use grid patterns for dithering.

The first screenshot is with "point scaling" turned on in OBS. The second is with the default scaling.

The attachment crtt_320x200_1440x1080_bordered_capture.png is no longer available
The attachment crtt_320x200_1440x1080_bordered_capture_2.png is no longer available

World's foremost 486 enjoyer.

Reply 213 of 236, by clb


Heya, thank you so much for all the detailed feedback. This has been a very interesting read. Let me try to unpack my thoughts one at a time below.

keenmaster486 wrote on 2025-02-27, 18:01:

My capture card works fine with 1920x1080 from the CRTT. 1600x1200 is what it doesn't like - it scans it as 800x600.

Hmm, this is indeed a bit annoying. I wonder how that happens, and whether it is common. Was this on the EVGA XR1 Lite, or some other device? I think I should have an EVGA XR1 to test. Capture devices should definitely not misdetect or guess the resolution - they can count the pixels and scanlines precisely, so they should never miss the correct resolution. (The only caveat I know of is that some devices require the width of a resolution to be a multiple of 2 or 4 pixels, which all of these resolutions satisfy.)

If you find other devices that do this, it would be interesting to know. I'll see if I can reproduce this issue and whether it can be worked around.

keenmaster486 wrote on 2025-02-27, 18:01:

First, the mode switching: I have some videos if you want to see them, but I'm still experiencing the "blip" when switching DOS video modes, despite the output mode remaining the same.

Example:
- I have the output mode set to 1600x1200x60Hz. Multimode off, Stutterstop off, vsync off (fixed 60Hz). This mode does not change throughout the test, and I can verify it by checking the info on my monitor. I'm using a monitor only, no capture card.
- I'm in DOS text mode
- I change the DOS video mode to any graphics mode
- The monitor blanks for 1-2 seconds as though it has detected a mode change
- But there has not been a mode change. The mode is still 1600x1200x60Hz.
- I change the DOS video mode back to text mode
- There is no "blip" this time. The modeswitch happens instantaneously and my monitor does not see an interruption in the 1600x1200x60Hz signal.

So I'm seeing a blip going from text->graphics, but not from graphics->text.

I troubleshot this issue today, and I was able to reproduce and fix it. This is definitely a bug in the CRT Terminator firmware.

What is happening here is that when a video mode is changed, the video mode/resolution counters on the PC VGA card are reset, i.e. the VGA card starts clocking out the new video mode abruptly, unsynchronized with the previous one. This will generally cause a vsync pulse to be emitted at a random time with respect to the old video mode scanout.

CRT Terminator's interlaced video mode detection misinterpreted this unexpected vsync pulse timing and, for the duration of a single frame, concluded that an interlaced video mode was being activated. Due to internal memory framebuffer bandwidth limitations, there are specific restrictions on interlaced video modes. This resulted in CRT Terminator thinking it needed to switch the output video mode, and as a result the output video mode briefly lost sync. The very next frame, CRT Terminator would realize that the current video mode is not interlaced at all, and it would resume normal operation.

I have authored a fix, which will go through testing and verification. I'll push an update out as soon as that is done. Thanks for reporting this issue! If you find any other video sync losses when MultiMode and StutterStop are disabled, please let me know.
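
(For illustration of the general debouncing idea only - the actual firmware logic is not shown in this thread - such a fix amounts to requiring the interlace measurement to stay stable for a few frames before the output is retimed, so that one stray vsync pulse cannot trigger a resync:)

    /* Illustrative sketch, not the real CRT Terminator firmware. */
    #define CONFIRM_FRAMES 3

    static int streak = 0;      /* consecutive frames agreeing with 'last' */
    static int last = 0;        /* most recent per-frame interlace measurement */
    static int committed = 0;   /* interlace state the output timing is using */

    void on_vsync(int frame_looks_interlaced) {
        streak = (frame_looks_interlaced == last) ? streak + 1 : 1;
        last = frame_looks_interlaced;
        if (streak >= CONFIRM_FRAMES && committed != last) {
            committed = last;   /* only now is it safe to retime the output */
        }
    }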

keenmaster486 wrote on 2025-02-27, 18:01:

My LCD monitor does not let me force the aspect ratio to 4:3. It only lets me set it to "original", which means it preserves the original aspect ratio of the mode rather than stretching it to fill the screen. This means that many of the Multimode modes get scanned at their native non-4:3 aspect ratios. For example text modes come in as 1440x1200 in Multimode, and consequently appear stretched vertically on my screen, despite having perfect scaling.

This is a great observation, and something that did turn up in our design process. Letterboxing directly in CRT Terminator is something that we did consider, but were compelled to turn down, because the focus was to develop that "holy grail" pixel-perfect and no-frame-skip support for 1600x1200 @ 70Hz.

As data points:
- The FPGA is rated up to 118.8 MHz video pixel clock.
- Displaying 1600x1200 @ 60 Hz @ CVT-RBv2 requires a 124.2 MHz pixel clock. That is just a tiny amount of overclocking over 118.8 MHz, which we found to practically always work.
- Displaying 1600x1200 as letter-boxed 1920x1200 @ 60 Hz @ CVT-RBv2 requires a 148.5 MHz pixel clock. This is a larger overclock, which the FPGA vendor has not guaranteed.

I.e., adding extra black pixels to letterbox would take the FPGA pixel clock further into the overclock domain, so we did not go in that direction, reasoning that it is better to choose a monitor that has 4:3 support built in. This way the video pixel "overclock bandwidth" could be allocated towards getting up to 70 Hz.

Another challenge is that the video upscaler process is the tightest, highest-performance loop construct in the FPGA. The process runs at the highest video pixel clock speed, e.g. that 124.2 MHz or 148.5 MHz. The internal operation of the FPGA itself works up to about 180-200 MHz, so there is very little headroom to run any logic in the per-pixel domain. A single if() branch in that area is already critical overhead.

Originally we struggled with these high resolutions, and eventually ended up purchasing Gowin's highest-binned speed grade of the particular FPGA SKU that we are using, in order to stretch the limits. Even so, the FPGA is still a ~200 MHz max part. To produce more complex upscaling logic, a part with 300 or 400 MHz switching speed would be needed, but the price then creeps up by several hundred more, as the new OSSCs and RetroTinks are showing.

I did try to squeeze in letterboxing, but unfortunately was not able to make it fit without tradeoffs to the max video pixel clock speeds. I can try to give this another go in the future, since we have developed a new timing closure optimizer after that first attempt, but unfortunately I cannot make any promises on this front. For best image quality, MultiMode is the recommended path.

keenmaster486 wrote on 2025-02-27, 18:01:

GRIDTEST program:

The attachment GRIDTEST.EXE is no longer available

Thanks for this test program. I gave this a run at 640x350 outputting to fixed 1600x1200, and capturing in OBS. The output pattern here looks correct and as expected:

The attachment fixed_resolution_upscaled_pixel_grid.png is no longer available

Horizontally, 1600 / 640 = 2.5, so 50% of source pixels should upscale to 2 pixels and 50% to 3 pixels. The upscaling therefore alternates each pixel in a 2,3,2,3,2,3,... pattern.
Vertically, 1200 / 350 = 3.4286, so 42.9% of source pixels should upscale to 4 pixels and 57.1% to 3 pixels. The upscaling therefore alternates between 3 and 4 pixels in an almost 50-50 manner.
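
In code form this pattern is just a fixed-point accumulator. A small illustrative sketch (a hypothetical helper, not the actual FPGA logic) that prints the per-source-pixel duplication counts for any src -> dst size:

    #include <stdio.h>

    /* Prints how many output pixels each source pixel covers when upscaling
       src pixels to dst pixels by integer duplication (nearest neighbour).
       run_lengths(640, 1600) prints 2,3,2,3,... and run_lengths(350, 1200)
       prints a mix of 3s and 4s whose counts sum to 1200. */
    void run_lengths(int src, int dst) {
        int acc = 0, prev = 0;
        for (int i = 0; i < src; i++) {
            acc += dst;               /* fixed-point accumulator, denominator src */
            int edge = acc / src;     /* output edge after this source pixel */
            printf("%d,", edge - prev);
            prev = edge;
        }
        printf("\n");
    }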

I agree that the result is not great for dithered patterns, although to my knowledge there is no strategy for improving dithers that would not cause more distortion to the image.

In your camera photos, I can see an effect that looks like double scaling has taken place. CRT Terminator upscaled 640x350 up to 1600x1200, and then maybe the display was a 1920x1080 panel, so it had to shrink the upscaled pixel grid a little vertically and extend it horizontally? Try outputting 1920x1080 directly from CRT Terminator in that case, so that the display will not need to apply further scaling to the CRT Terminator scaled image. That might give a bit better output.

keenmaster486 wrote on 2025-02-27, 18:07:

And I have one more question - is there a way to make the overscan border persistent? I can turn it on with the CRTT tool, but it goes away the next time I power cycle the machine.

CRT Terminator settings do not persist over power cycling. The idea with CRTT.EXE is to make the tool small so that it can be placed in AUTOEXEC.BAT to persist settings across boots. Would that work?

keenmaster486 wrote on 2025-02-28, 23:58:

Another update: The CLGD tool works on my CL-GD5428 card. So I have hardware pallette updates without needing PALTSR. Certain demos actually work now.

So it worked on 5428, but not on 5426. I wonder if it's the chip or something to do with the design of the card that carries the chip.

That's really interesting to hear. I am puzzled as well; I'm not sure what the detail with the different CL variants might be.

keenmaster486 wrote on 2025-03-01, 04:22:

I have just thought of something else.

Probably about half of widescreen LCD monitors have the ability to display the image in its original aspect ratio, or perhaps force it to 4:3.

It might be useful for the CRTT to have an optional mode to output a 4:3 image centered in the middle of a wider output resolution, to handle the aspect ratio correction itself, especially moving into the future when it is not guaranteed that the average monitor 10 years from now for example is going to have the ability to display a 4:3 aspect ratio properly.

This sounds like the letterboxing idea above? I'll look into this possibility in the future, to see if there is enough timing headroom to make it work. Though it is not certain that this will be feasible to implement.

Reply 214 of 236, by keenmaster486

clb wrote on 2025-03-04, 13:08:

Was this on the EVGA XR1 Lite, or some other device?

Yep, this was the XR1 Lite. I'm capturing with OBS; maybe that has something to do with it, as the only resolutions I can "force" are common "modern" resolutions like 1920x1080 and 1280x720. But whether I set the resolution explicitly in OBS or not, it still scans 1600x1200 as 800x600. It's possible that 1200 lines is simply out of range for this card, as it's supposedly 1080p maximum.

clb wrote on 2025-03-04, 13:08:

I troubleshot this issue today, and I was able to reproduce and fix it. This is definitely a bug in the CRT Terminator firmware.

Thank you so much for looking into this and fixing it. Very interesting what the problem turned out to be.

clb wrote on 2025-03-04, 13:08:

This is a great observation, and something that did turn up in our design process. Letterboxing directly in CRT Terminator is something that we did consider, but were compelled to turn down, because the focus was to develop that "holy grail" pixel-perfect and no-frame-skip support for 1600x1200 @ 70Hz.

Perhaps for "letterboxing", a reasonable native 4:3 resolution could be chosen, and the scaling worked out from there?

clb wrote on 2025-03-04, 13:08:

Thanks for this test program. I gave this a run at 640x350 outputting to fixed 1600x1200, and capturing in OBS. The output pattern here looks correct and as expected:

The attachment fixed_resolution_upscaled_pixel_grid.png is no longer available
Horizontally, 1600 / 640 = 2.5, so 50% of source pixels should upscale to 2 pixels and 50% to 3 pixels. The upscaling therefore alternates each pixel in a 2,3,2,3,2,3,... pattern.
Vertically, 1200 / 350 = 3.4286, so 42.9% of source pixels should upscale to 4 pixels and 57.1% to 3 pixels. The upscaling therefore alternates between 3 and 4 pixels in an almost 50-50 manner.

I agree that the result is not great for dithered patterns, although to my knowledge there is no strategy for improving dithers that would not cause more distortion to the image.

Correct, this is exactly the pattern I'm seeing - and I agree it's the best solution if you want to keep perfect sharpness on the pixels.

However, I think it could benefit from just a *little* bit of "fudging" to make the pixels look more even (for these higher resolutions and for text mode, not for 320x200 upscaled to 1600x1200! But 320x200 upscaled to, say, 1440x1080 with overscan border could also benefit from this, I've noticed).

Here's a detail view of how DOSBox, for example, handles 640x350 upscaled to 1600x1200:

The attachment dosbox_640x350_1600x1200_detail.png is no longer available

I'm not really sure what this type of scaling is called. But it seems to involve shading a little around the edges of the pixels so that they appear perfectly even to the eye when viewed at 1x.

To my eye this is a good compromise between the ultra-fuzzy, brain-dead scaling that the typical LCD monitor or VGA-to-HDMI converter does, and perfectly sharp pixels at the cost of evenness.

I understand the implementation of this algorithm, if you even wanted to attempt it, might cost some FPGA time. For example, maybe enabling it moves 1600x1200 into the realm of "might work" and 1440x1080 becomes the new "probably stable" resolution. To me personally that would be an acceptable tradeoff, especially if it were an optional thing.

You can see how nice it looks if you set DOSBox to opengl output (not openglnb), a fixed windowed resolution of 1600x1200 (or even 1440x1080), and run GRIDTEST with 640x200, 640x350, or 640x480.

Setting the DOSBox output to openglnb results in the same behavior as CRTT. This is the scaling mode I used to use, but then I saw the light of having a little bit of fudging for even pixels.

I might even say that this is especially important for text mode, where perfectly sharp scaling usually results in jagged, uneven letters that don't look great.

This way it also looks very much like the actual output from a CRT monitor, which is almost never perfectly sharp at these higher resolutions, especially in the early 90s. But 320x200 will be much closer to "perfectly sharp".

Another way of looking at it is that a CRT will *never* have uneven pixels. It will have varying degrees of sharpness depending on the quality of the tube and circuitry, but the individual pixels will never be uneven.

clb wrote on 2025-03-04, 13:08:

In your camera photos, I can see an effect that looks like double scaling has taken place.

Yes, I am using a 2560x1440 monitor, so it upscales a little. I should have set it to "just scan" mode, which doesn't scale at all, so the camera photos would look more accurate. But the effect is the same either way. The monitor's scaling from 1200->1440 affects the way it looks only slightly.

clb wrote on 2025-03-04, 13:08:

CRT Terminator settings do not persist over power cycling. The idea with CRTT.EXE is to make the tool small so that it can be placed in AUTOEXEC.BAT to persist settings across boots. Would that work?

Yeah, that's what I ended up doing. I think that's fine. I suppose if you wanted the settings to persist it would need some flash memory or a backup battery 🤣.

clb wrote on 2025-03-04, 13:08:

This sounds like the letterboxing idea above? I'll look into this possibility in the future, to see if there is enough timing headroom to make it work. Though it is not certain that this will be feasible to implement.

Hmm, well, maybe it could just be an optional feature, with no guarantee that it doesn't break the higher output resolutions.

Thank you again for all of your hard work on this project. It really is one of the most promising developments in vintage PC computing that I've seen in a long time.

World's foremost 486 enjoyer.

Reply 215 of 236, by clb

keenmaster486 wrote on 2025-03-04, 18:47:

Here's a detail view of how DOSBox, for example, handles 640x350 upscaled to 1600x1200:

The attachment dosbox_640x350_1600x1200_detail.png is no longer available

I'm not really sure what this type of scaling is called. But it seems to involve shading a little around the edges of the pixels so that they appear perfectly even to the eye when viewed at 1x.

This looks like the "Area upsampling" mode, which CRT Terminator achieves with the Multimode operation, and OBS does with the "Area" scaling mode.

I took captures of GRIDTEST in the different modes:

The attachment upscaling_modes.png is no longer available

The mobile phone camera capture is a bit hard to compare against the others, though visually it does look effectively the same as the DOSBox and the OBS captures.

I understand you are asking for the visual quality of the MultiMode filtering without the use of MultiMode. If it were possible, I would love to implement this scaling directly in the FPGA without the MultiMode strategy, but unfortunately relying on the LCD display's upscaling circuitry to implement this scaling was a necessary compromise - there was not enough performance available otherwise. 🙁

Reply 216 of 236, by keenmaster486

clb wrote on 2025-03-04, 22:33:

This looks like the "Area upsampling" mode, which CRT Terminator achieves with the Multimode operation, and OBS does with the "Area" scaling mode.

Yes, looks like that's exactly what it's doing.

clb wrote on 2025-03-04, 22:33:

I understand you are asking for the visual quality of the MultiMode filtering without the use of MultiMode. If it were possible, I would love to implement this scaling directly in the FPGA without the MultiMode strategy, but unfortunately relying on the LCD display's upscaling circuitry to implement this scaling was a necessary compromise - there was not enough performance available otherwise. 🙁

Yep, that's what I'm hoping for.

That is unfortunate. Is it not even possible to do something that shades the pixels on the edges even a little? If you're able to handle 800x600 @ 75 Hz, could you do a little more per pixel at lower resolutions, for example? Or does it just not work that way? Forgive my ignorance as to the difficulty of programming an FPGA; I understand you're working at the limit of the hardware here.

If it's a matter of banging your head against the wall for hours until you come up with something that works and you just don't want to do that, I'd be happy to try my hand at it.

World's foremost 486 enjoyer.

Reply 217 of 236, by jmarsh

keenmaster486 wrote on 2025-03-04, 18:47:

Here's a detail view of how DOSBox, for example, handles 640x350 upscaled to 1600x1200:

The attachment dosbox_640x350_1600x1200_detail.png is no longer available

I'm not really sure what this type of scaling is called. But it seems to involve shading a little around the edges of the pixels so that they appear perfectly even to the eye when viewed at 1x.

That was added when support for OpenGL shaders was implemented; unsurprisingly I named the shader "sharp" because I couldn't think of anything more fitting. It uses no blending except for the rows and columns of output pixels that straddle input pixel boundaries, where it uses bilinear. In cases where the output size is an exact integer multiple of the input size, the output should be identical to nearest-neighbour.
Even though it's implemented as a shader in DOSBox, it's not too difficult to implement in software at no more cost than a regular bilinear transform.
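
For anyone wanting to reproduce the effect in software, the usual formulation of this technique (a sketch of the general "sharp bilinear" idea, not the actual DOSBox shader source) remaps the sample coordinate so it sticks to the pixel centre except within a one-output-pixel-wide band at each input pixel boundary, then lets an ordinary bilinear fetch do the blending:

    #include <math.h>

    /* p is the sample position in source-pixel units;
       scale = output_size / input_size (>= 1), per axis.
       The result snaps to the source pixel centre except in the boundary
       band, where it ramps linearly. At integer scales every sample lands
       on a pixel centre, i.e. plain nearest-neighbour. */
    float sharp_coord(float p, float scale) {
        float base = floorf(p);
        float dist = (p - base) - 0.5f;         /* offset from pixel centre */
        float range = 0.5f - 0.5f / scale;      /* half-width of the "flat" zone */
        float clamped = fminf(fmaxf(dist, -range), range);
        return base + 0.5f + (dist - clamped) * scale;
    }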

Reply 218 of 236, by clb

jmarsh wrote on 2025-03-05, 05:51:
keenmaster486 wrote on 2025-03-04, 18:47:

Here's a detail view of how DOSBox, for example, handles 640x350 upscaled to 1600x1200:

The attachment dosbox_640x350_1600x1200_detail.png is no longer available

I'm not really sure what this type of scaling is called. But it seems to involve shading a little around the edges of the pixels so that they appear perfectly even to the eye when viewed at 1x.

That was added when support for OpenGL shaders was implemented; unsurprisingly I named the shader "sharp" because I couldn't think of anything more fitting. It uses no blending except for the rows and columns of output pixels that straddle input pixel boundaries, where it uses bilinear. In cases where the output size is an exact integer multiple of the input size, the output should be identical to nearest-neighbour.
Even though it's implemented as a shader in DOSBox, it's not too difficult to implement in software at no more cost than a regular bilinear transform.

Cool to hear that you authored that shader. Yeah, what you describe is the "surface area upsampling" algorithm, where pixels are treated as rectangles (rather than as points in a 2D function space, as in traditional bilinear filtering): the output pixel grid is overlaid on top of the input pixel grid, and the output pixels that straddle input pixel edges get a proportional contribution from each input pixel they overlap. In practical terms, this closely matches what happens with CRT Terminator when MultiMode is used: CRT Terminator upscales the image by an integer factor, and the output display then bilinearly scales that image to fit the screen.
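
In software the 1-D version of that overlap computation is short; a minimal illustrative sketch (apply it separably per axis for 2-D):

    /* 1-D surface area upsampling, sw source pixels -> dw output pixels
       (sw < dw): each output pixel averages the source pixels it overlaps,
       weighted by the width of the overlap. */
    void area_upsample_1d(const float *src, int sw, float *dst, int dw) {
        for (int x = 0; x < dw; x++) {
            float lo = (float)x * sw / dw;         /* output pixel's span */
            float hi = (float)(x + 1) * sw / dw;   /* in source coordinates */
            float sum = 0.0f;
            for (int s = (int)lo; s < sw && (float)s < hi; s++) {
                float a = lo > (float)s ? lo : (float)s;
                float b = hi < (float)(s + 1) ? hi : (float)(s + 1);
                sum += src[s] * (b - a);           /* weight = overlap width */
            }
            dst[x] = sum * dw / sw;                /* overlap widths sum to sw/dw */
        }
    }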

Implementing bilinear filtering and/or surface area sampling in software is easy and straightforward. It requires six multiplications and three additions per color component, so 18 multiplications and 9 additions total (ignoring gamma correctness; let's not get too ambitious). For an FPGA, it is a completely different beast: each individual multiplication in the high-frequency domain effectively requires a dedicated multiplier and adder unit inside the FPGA. Attempting to perform that arithmetic in "generic flip-flops", as it is said, would dramatically limit the max pixel clock and increase space usage. And this is before even dealing with floorplan and routing constraints. To put this in context, the whole FPGA has 24 hardware multiplier units, which are all in use already; adding a single multiplication or addition into the upscaler is already a major challenge.
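
To make that operation count concrete, here is the per-component inner step as it looks in software (a sketch; the (1-fx) and (1-fy) weights would normally be precomputed once per output pixel):

    /* One bilinear sample per color component: three lerps of the form
       (1-t)*a + t*b, i.e. 6 multiplications and 3 additions, matching the
       count above. a,b and c,d are the top and bottom neighbour pairs. */
    static float bilerp(float a, float b, float c, float d, float fx, float fy) {
        float top = (1.0f - fx) * a + fx * b;   /* 2 mul, 1 add */
        float bot = (1.0f - fx) * c + fx * d;   /* 2 mul, 1 add */
        return (1.0f - fy) * top + fy * bot;    /* 2 mul, 1 add */
    }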

There might be ways to replace some of the multiplications with piecewise approximations and shifts, at the expense of image quality, but that requires more floorplan. CRT Terminator is packed full, currently using about 96% of the FPGA floorplan resources. The current integer scaler subsystem takes up about 31% of that floorplan, so we just do not have any physical room left to implement a surface area scaler. 🙁

keenmaster486 wrote on 2025-03-04, 23:08:

If it's a matter of banging your head against the wall for hours until you come up with something that works and you just don't want to do that, I'd be happy to try my hand at it.

This is definitely not a matter of "I don't want to do it". Believe me, I would love to implement this in the FPGA, and would do it in a split second to accommodate this request if I could, but it requires a larger and more powerful FPGA to make it all fit.

Originally, when we designed CRT Terminator's Multimode feature as the way to achieve this desirable "surface area sampling" image quality, we reasoned that the 1-2 second video resync it brings is fundamentally unavoidable anyway: each VGA video mode has a different refresh rate, which necessitates a resync to get the no-frameskipping, no-stuttering result (and we thought frameskip/stuttering would be a non-starter for users). So adjusting the output resolution as part of the MultiMode strategy seemed an OK way to achieve the best image quality, since it can happen "for free" at that resync.

Unfortunately we just did not anticipate this other ordering of preferences, where giving up no-frameskip/no-stutter in exchange for immediate video mode changes would be preferable, while (naturally) not wanting to sacrifice any image quality in that tradeoff. 🙁 In hindsight I understand this request; it is just that the development of CRT Terminator was very much led by the aim of solving the pixel-perfect, no-stuttering 320x200 -> 1600x1200 upscaling "utopia" that so many threads on Vogons had repeatedly called out as impossible.

Reply 219 of 236, by jmarsh


I guess if it's 96% used, you definitely don't have room for a 65536 (256x256) byte LUT to avoid multiplying...
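
(In software terms, that LUT trick would look something like this sketch: one 64 KiB table turns an 8-bit multiply into a lookup, so a weighted blend needs only two lookups and an add.)

    /* mul_lut[v][w] ~= v*w/255; 256*256 = 64 KiB of table memory. */
    static unsigned char mul_lut[256][256];

    void init_mul_lut(void) {
        for (int v = 0; v < 256; v++)
            for (int w = 0; w < 256; w++)
                mul_lut[v][w] = (unsigned char)((v * w + 127) / 255);
    }

    /* Blend two 8-bit values with weight w in [0,255], no multiplier needed. */
    unsigned char lut_lerp(unsigned char a, unsigned char b, unsigned char w) {
        return (unsigned char)(mul_lut[a][255 - w] + mul_lut[b][w]);
    }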