VOGONS


First post, by Dusko

User metadata
Rank Member

Hi All!
I'll go straight to the question and elaborate after: what are the best settings on the RetroTINK-4K and OBS Studio for capturing a PC’s VGA output?

Two VERY important things to consider:
1- It doesn't make sense to use overkill settings just because you can. If they won't actually improve anything, there's no point in using them.
2- If you're going to suggest something, please explain why, so I (we) learn something.

So the whole idea is to make sure I'm getting the best possible image when recording from a VGA PC output, and I don't want to miss a setting that could improve my current config.

I got a RetroTink 4K several years ago (I think it was the second batch). Yes, it hit my wallet hard, but I don't regret it. Before that, I used several scalers: Extron, Gefen, Kramer, etc. My favorite of the bunch was the Gefen (VGA to HDMI), very solid.

I have two capture devices, a Nearstream CCD10 and an EazyCap Gamedock Ultra. Both are fine for what I need. A few days ago I pulled the trigger on a Magewell HDMI to USB 3.0 (gen 2) I found fairly cheap on eBay (it hasn't arrived yet). From what I've heard, it's more on the "Pro" side.

So the setup will be like this:
PC VGA -> RetroTink 4K -> Magewell HDMI to USB 3.0 -> OBS (Main PC) -> Editing (final video)

For editing I'm using DaVinci Resolve Studio -> MP4 H.264 or AV1. I've been using AV1 for several months now and "I think" it's just fine.

One thing that came up while researching is 4:4:4 chroma sampling (or subsampling, or whatever the correct term is), but I'm sure that's ridiculous for what I need, not to mention the file sizes it would generate. Again, this is complete nonsense for my use case. ChatGPT got me into thinking that's what I need, or is it?
This is a good video that explains it: https://youtu.be/0Mds4-ggpNI

From what I understand so far:

- Keep the same resolution across the HDMI chain (1080p is what I use)
- If you are aiming for 4:4:4 and your gear supports it, you should avoid filters in OBS because they may internally process at lower chroma.
- Basically, the chain should stay as clean and untouched as possible.

Don't take my word for it, that's just my current understanding. And again, I seriously doubt I’ll be doing 4:4:4 anytime soon (probably never). <- Wrong.

As usual, the weakest component in the chain will limit everything. In this case we’re talking about five components total.

And then there’s the elephant in the room: YouTube.

Besides archiving, the goal is to upload to YouTube. I know they compress everything, but generally speaking, the better your source, the better it should look after their compression.

My setup is as follows:

RetroTink 4K most commonly used config:

VGA input: RGBHV
HDR: Off
Colorimetry: Auto (Rec. 709)
RGB Range: Full
Sync Lock: Triple Buffer
Output Res: 1080p @60
Autocrop: Full to 4:3
Scaling: Auto Fill
Buffer Length: Min. Lag
Interpolation (Vertical and Horizontal): Bilinear Sharp
Anti-Ringing: On
Linear Light: On
Transfer Function: sRGB
Deinterlacing: Weave (I just realized I've been using Motion Adaptive)

Any other setting that I'm not mentioning is either off or it has some default/auto values.
Notes:
I do Auto Phase and Gain calibration as needed.
I don't use any FXs or CTRs simulation.

My OBS config:

Recording: MP4, NVIDIA NVENC AV1
Rescale output: Disabled
Encoder: CQP 18
Keyframe Interval: 2s
Preset: P7 Slowest (Best Quality)
Tuning: High Quality
Multipass: 2 passes
Look-ahead: On
Adaptive Quantization: On
B-Frames: 2
B-Frame as reference: Disabled
Advanced:
Process Priority: Above Normal
Renderer: Direct3D 11 (only option)
Color Format: BGRA (8-bit)
Color Space: Rec. 709
Color Range: Full
SDR White Level: 300 nits
HDR Nominal Peak Level: 1000 nits (I'm not using HDR anyway)

So… what would you guys recommend for RetroTINK-4K and OBS settings for VGA capture (and why)?

Thanks!!

Last edited by Dusko on 2026-03-09, 04:54. Edited 3 times in total.

Retro PC games channel: http://www.youtube.com/@myRetroPC
Electronics, mods and tools channel: https://www.youtube.com/@RetroRust-75
Reddit: https://www.reddit.com/r/MyRetroPC
FaceBook: https://www.facebook.com/myretropc

Reply 1 of 9, by jh80

User metadata
Rank Newbie

You're saying two different things: 1) that you want the best possible image for archiving purposes, and 2) that you don't care about compression and will be uploading to YouTube.

It sounds like you're really focusing on (2).

If you want (1), then you definitely want pixel-level accuracy. That means 4:4:4 chroma and maintaining original resolution and refresh rate.

I'm just bringing this up because you're going to get drastically different responses based on use-case.

I don't have a RetroTink 4K so I can't respond directly, but I'm curious as to what people say about it.

Reply 2 of 9, by Dusko

User metadata
Rank Member

Hmm, sorry if that was confusing. What I’m trying to say is that I want to get the best possible image quality without going overboard or maxing out all the settings just because I can.

I might be wrong in saying that 4:4:4 is overkill. What if it isn't? That's actually one of the main points: I'm not sure whether it makes sense for my specific use case.


Reply 3 of 9, by Kordanor

User metadata
Rank Member

I am using a Voodoo 3 for output, then a RetroTink 4K, and then the HDMI/DVI signal goes into the Datapath 4K.

What I am doing on the Tink4K:
I upscale everything to a 1080p 60 Hz signal. On PC, skipped frames are not as much of a "problem" as on consoles or home computers, where 50 Hz often means 50 frames, for example. So I don't mind losing a few frames in DOS, and you can't put 70 FPS/Hz videos on YouTube anyway.
The resolution is not maxed out. I ALWAYS do pixel-perfect scaling, unless the resolution goes so high that it's not possible anymore.
This means I enter the pixels as a custom resolution and set scaling to integer.

320x200 -> 1280x1000; yes, that's not 4:3, but it's very close and better than non-pixel-perfect scaling (big caveat, see below)
640x400 -> 1280x1000; yes, every second vertical pixel becomes 3 pixels instead of 2, but again, it's better than the alternative. This might depend on the game, though.
640x480 -> 1280x960
800x600 -> From here on you get into trouble, and you will need to use different scaling methods, because pixel-perfect scaling will not yield decent results anymore
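The "largest integer multiplier that still fits in 1080p, then pick the width closest to 4:3" logic above can be sketched in a few lines of Python. This is a toy helper for illustration only: `pixel_perfect_1080` is a made-up name, and it deliberately does not cover the 640x400 -> 1280x1000 case, which uses a fractional 2.5x vertical scale.

```python
def pixel_perfect_1080(src_w, src_h, out_w=1920, out_h=1080, target_ar=4/3):
    """Largest integer scale that fits a source mode inside a 1080p frame,
    choosing the horizontal multiplier whose width best approximates 4:3."""
    v = out_h // src_h              # biggest integer vertical multiplier
    if v == 0:
        return None                 # source is taller than the frame
    height = v * src_h
    ideal_w = height * target_ar    # width a true 4:3 image would need
    h = max(1, round(ideal_w / src_w))
    while h * src_w > out_w:        # shrink until it fits horizontally
        h -= 1
    return h * src_w, height

print(pixel_perfect_1080(320, 200))   # (1280, 1000) -- matches the list above
print(pixel_perfect_1080(640, 480))   # (1280, 960)
print(pixel_perfect_1080(800, 600))   # (800, 600): only a 1x multiplier fits
```

The 800x600 result shows why pixel-perfect scaling stops being useful there: only a 1x multiplier fits vertically in 1080p, so the image stays tiny.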

For OBS:
It is essential to capture the signal in 4:4:4, even if you only do this for YouTube and don't save the actual video in maximum quality. I used a different way of capturing once, without the RGB part, and especially in DOS Navigator, with its red font on a blue background, it looked horrific. So make sure to take in the signal as 4:4:4.
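The red-on-blue case is exactly where chroma subsampling hurts most, and it's easy to illustrate. The sketch below is a toy Python example: the function names are made up, the conversion is an approximate BT.601-style full-range matrix, and a real capture chain may use BT.709 and limited range instead.

```python
def rgb_to_ycbcr(r, g, b):
    # Approximate BT.601 full-range RGB -> YCbCr conversion
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.1687 * r - 0.3313 * g + 0.5 * b + 128
    cr =  0.5 * r - 0.4187 * g - 0.0813 * b + 128
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    r = y + 1.402 * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772 * (cb - 128)
    return tuple(max(0, min(255, round(v))) for v in (r, g, b))

# A red text pixel next to a blue background pixel (one horizontal pair)
y1, cb1, cr1 = rgb_to_ycbcr(255, 0, 0)
y2, cb2, cr2 = rgb_to_ycbcr(0, 0, 255)

# 4:4:4 keeps chroma per pixel, so each color round-trips cleanly.
# 4:2:2 / 4:2:0 share one averaged chroma sample between neighbours:
cb_avg, cr_avg = (cb1 + cb2) / 2, (cr1 + cr2) / 2
red_sub  = ycbcr_to_rgb(y1, cb_avg, cr_avg)   # drifts toward purple
blue_sub = ycbcr_to_rgb(y2, cb_avg, cr_avg)   # drifts toward purple
print(red_sub, blue_sub)
```

Both pixels keep their brightness but end up with the same averaged hue, which is why sharp red text on a blue background smears so badly when the chain drops below 4:4:4.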

In order to get the best quality on YouTube, you should set OBS to 4K, even if nothing you use is 4K. Just set OBS to 4K and upscale everything. YouTube will use a better codec and overall better quality for 4K uploads, and it even reflects on 1080p footage. Though some of that info might not be up to date anymore.

Anyway, I would recommend setting OBS to 4K and saving videos as 4K. I don't even have a 4K screen. But it comes with a positive side, because you also see what happens if you do it wrong:
The footage in your 1080p capture signal should be doubled, resulting in 4K of course (black borders, and your 1280x1000 will turn into 2560x2000 within the 4K signal). The scaling option to double it in OBS is called "Point". "Area" is second best, but Point is pixel-perfect duplication (actually you might want to fiddle around with Area for those 800x600 signals and up, but I didn't experiment with that in OBS myself).

If you move this by one pixel, you will see a massive change in the output on a 1080p screen, because now, when downscaled, you would "mix" different pixels (e.g. instead of Pixel 1 Red, Pixel 2 Red, Pixel 3 Blue, Pixel 4 Blue downscaling cleanly to Red and Blue, you would get a blurry mess where Pixel 2 and Pixel 3 are mixed). I see that clearly in my OBS preview if I move it by just one pixel.

And while I didn't test that with 4K recordings, my guess is that this would also affect something like 800x600 if you thought "OK, but for 4K I could just triple it". It will probably look decent at 4K, but (probably) bad if anyone watches it at 1080p. But this is speculation on my end.
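The one-pixel-misalignment effect described above is easy to demonstrate on a single row of pixel values. This is a toy Python sketch (single channel, made-up helper names `point_double` and `box_downscale`), not how OBS or YouTube actually implement their scalers:

```python
def point_double(row):
    # OBS "Point"-style 2x upscale: each pixel is duplicated, no blending
    out = []
    for p in row:
        out += [p, p]
    return out

def box_downscale(row):
    # Average adjacent pairs, like a simple 4K -> 1080p downscale
    return [(row[i] + row[i + 1]) / 2 for i in range(0, len(row), 2)]

row = [255, 255, 0, 0]                     # e.g. red, red, blue, blue (1 channel)
aligned = box_downscale(point_double(row))
print(aligned)                             # [255.0, 255.0, 0.0, 0.0] -- lossless

# Shift the doubled image by one pixel and the downscale pairs no longer
# line up with the duplicates, so neighbouring pixels get blended:
shifted = box_downscale(point_double(row)[1:] + [0])
print(shifted)                             # the sharp edge blurs into 127.5
```

When the 2x duplicates line up with the downscale's averaging pairs, the original row is recovered exactly; shift it by one pixel and the averaging straddles two different source pixels, producing exactly the blur described above.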

The Big Caveat for 320x200
Now... there is one issue. And this is why all of the above is... not how I am actually doing it for 320x200, but how I would do it if I didn't have the Datapath Vision RGB capture card.
Because you get some bad noise at 320x200, and I haven't found any good way to get rid of it via the Tink OR OBS. I linked two videos showing it in this thread: Re: CRT Terminator Digital VGA Feature Card ISA DV1000
This noise is especially visible on dark mono-colored screens. Like the text boxes in Lands of Lore or Eye of the Beholder, Inventory in Might and Magic 3, or just walls in Dungeon Master 2. This also happens on both of my graphics cards (Voodoo 3 and Trio64).
While visible to the eye in the preview, the issue gets worse if you compress it with AV1. I was happy to upgrade my graphics card to an RTX 5060 Ti recently and use the much more effective AV1 codec. However, the compression makes the grain more visible. It basically turns into "clouds", which look pretty bad.

Now...there is the option to use the CRT Terminator card. Thread here: CRT Terminator Digital VGA Feature Card ISA DV1000
It's a card you put into your ISA slot. It snoops the ISA bus, additionally gets plugged into your graphics card's feature connector, and turns this into a perfect digital signal. No grain.
The Downsides:
-This thing costs 300€ for me if I consider shipping and whatnot.
-For our use case we wouldn't even use any of the scaling features the card offers (only passthrough), so I guess a chunk of that strong FPGA would be "wasted". The developer mentioned something along the lines of: a smaller FPGA might not help much to reduce costs, because the order quantities for that smaller FPGA would be lower, in turn increasing the price again.
-Your graphics card needs to be compatible, meaning it needs to be PCI or ISA (not AGP), and the feature connector needs to work (it doesn't on the Voodoo 3).

So for now I did not do that, but I still consider the investment.
My workaround is a... silly one, but one you won't have.

Since I use my Datapath for capturing, I have the program VCS. And VCS has a cool feature called "Pixelgate". This lets me vastly reduce noise at the cost of a theoretically small delay, which I don't notice. The noise turns into "static", non-moving noise, which reduces file sizes and isn't apparent in the recording. But capturing that is a bit weird. The way it works is that VCS gets the signal, applies the pixel gate, and displays it on the screen. And then I use OBS to capture the VCS window. Which... yes, is kinda stupid, but it works. However, because everything not on the screen is cut off, and I don't want to put VCS in fullscreen (so I can also see the framerate, for example), I actually forward a 1280x1000 signal to my PC, not 1920x1080. This gives me 80 pixels for that ugly Windows bar, and it still fits on the screen (the Windows bar is not captured).

I guess this could be done with an OBS filter. But well... someone would have to make it first. I did post suggestions on Discord and the OBS forums, but nobody cared, though I guess this would also be super helpful for camera grain. Anyway... I guess it's not going to happen, or only by coincidence.
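For what it's worth, a temporal pixel gate like the one described (small frame-to-frame fluctuations frozen, real changes passed through) could look roughly like this per frame. This is purely my guess at the mechanism from the behaviour described, not how VCS actually implements Pixelgate; `pixel_gate` and the threshold value are made up.

```python
def pixel_gate(prev_frame, new_frame, threshold=8):
    """Keep each pixel's previous value unless it changed by more than
    `threshold`; small ADC/analog noise is frozen, real changes pass."""
    return [p if abs(n - p) <= threshold else n
            for p, n in zip(prev_frame, new_frame)]

prev = [100, 100, 100]
new  = [103,  98, 160]   # first two only jitter by noise, third really changed
print(pixel_gate(prev, new))   # [100, 100, 160]
```

Freezing the jittering pixels is also why it helps file sizes: the encoder sees static areas instead of a slightly different frame every time.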

For OBS Capture I use
Hybrid MP4 with NVIDIA NVENC AV1
Constant QP with a value of 18. Preset: Slow (good quality), Tuning: High Quality, Multipass: Two Passes (Quarter Resolution), Look-ahead active, 0 B-frames.

Reply 4 of 9, by Dusko

User metadata
Rank Member

Thanks Kordanor.

I finally got the Magewell HDMI capture card (dongle). It seems to work well and has a lot of settings to experiment with.
I'm still trying to understand a few things you mentioned, and again, thanks for the detailed explanation.
That Digital Feature card seems promising, but it's a bit expensive (I don't blame them).
I'm happy to hear you got an RTX 5060 Ti, nice! I have an RTX 5060 (8 GB) in my laptop and a 5070 (12 GB) in my desktop; that's where I do all the DaVinci stuff.

I have a few questions:

1. Where in the Tink4K menu are you setting those custom resolutions? Then you record it in 1080p or 4K in OBS, is that correct?
2. When a game switches between different resolutions, do you switch profiles on the fly? I imagine that could get a bit tedious. I switch profiles when I see that the borders are cut off.
I usually manage to crop to 4:3 at some point in the game, and that sometimes works for the entire session (menus, gameplay, cutscenes). It’s a bit odd, but in some cases it does the trick.
3. My current workflow is recording in 1080p in OBS, then editing in DaVinci Resolve and exporting in 4K for YouTube. Is that a valid approach?
I also read somewhere that using filters in OBS can break 4:4:4 chroma sampling and reduce it to something else, but I’m not sure if that’s accurate.
4. Have you suggested the idea of implementing the “Pixelgate” feature in the Tink4K Discord?
5. Have you tried the Smoothing / Algorithm option on the Tink4K? I’m curious if that helps. I can’t test it with Lands of Lore right now, but I will later.

When it comes to pixel-perfect accuracy, I’m generally happy with my results. I’d probably need to have a CRT next to it to really notice if some shapes or text are slightly off. If I see an oval shape that should be a circle, I redo the 4:3 crop on the Tink4K, and that usually seems to fix it (or at least I think it does), but I may be wrong.

Thanks again!


Reply 5 of 9, by Kordanor

User metadata
Rank Member

Oh, I just mentioned my upgrade to the RTX 5060 Ti specifically because I used a GTX 1070 before, and that card did not support hardware AV1. But it is due to the use of the much better and more effective AV1 codec that this grain gets multiplied, which then becomes a bigger problem.

To your questions:
1. Custom Resolutions: On the Tink, in Scaling/Cropping, set it to free form and adjust accordingly. Most stuff I would then just output at 1080p 60fps. As mentioned, due to my "workaround" I actually output 320x200 as a 1280x1000 signal, but that's a custom output which doesn't make much sense for you. Just set 1280x1000 within the 1920x1080 signal instead.
2. Resolution switching on the Tink4K is automatic. One profile contains all kinds of settings, where each setting is determined by input signal resolution and refresh rate.
So if you switch from 640x480 to 800x600, that is automatic and saved within the same profile. However, due to the joy of DOS using 720x400 70 Hz a lot (for text mode, for 320x200, and for 640x400, a resolution which even exists in 2 variants), this is where you need different profiles. So I have one main profile which covers 320x200 games, and for text mode I use a different profile. There is no way around that (I guess the CRT Terminator might be able to deal with it).
3. In the end it doesn't matter. Just make sure you are using integer scaling to scale from 1080p to 4K. In OBS that is called "Point". If you use Resolve for scaling, then make sure to go to Scaling for that clip and set it to Integer, and of course make sure you set the zoom to 200%.
4. Yes, I did suggest it. But the creator said it's not possible, being too complex/resource-intensive. Not being very deep into that myself, but having some knowledge of programming, I would guess it would be possible if it was restricted enough (e.g. low resolutions only, up to 3-5 frames, or something like that). But I can't really argue here; it's his thing, and it's obvious that PC is also not a priority (though I guess this could also help with some console stuff). On top of that, this pixel gate would mostly be for recording, and not important for live display (as mentioned, compression doesn't like grain; while visible without compression, it doesn't look that bad).
5. I tested various stuff, but I can't say anymore what exactly. If you find a solution, let me know ^^

Reply 6 of 9, by Kordanor

User metadata
Rank Member

Just looked into the settings again. I guess you were talking about the deinterlacer/film setting. I changed these and can't notice any difference at all. It might be that they are not even applied to that signal, as it's not interlaced.

Reply 7 of 9, by vvbee

User metadata
Rank Oldbie
Kordanor wrote on 2026-03-05, 03:49:

Because how it works is that VCS gets the signal, uses the pixel gate, displays it on the screen. And now I use OBS to capture the VCS window. Which...yes is kinda stupid but it works.

Doesn't OBS have the ability to capture OpenGL directly?

Reply 8 of 9, by Kordanor

User metadata
Rank Member

Hrm, I think I get what you mean: setting the renderer in VCS to OpenGL and then setting OBS to capture a "game". This does work, even if the VCS window is not fully on the screen. But it will still break if the task is minimized, and it comes with the additional issue that OpenGL is apparently not saved as the renderer. At least, that's according to how OBS behaves: it shows OpenGL as active in VCS, but I always need to click on it again for it to work. So I guess it always starts with the software renderer and the display is incorrect (and shows whatever was active last). But that's just my guess.

Reply 9 of 9, by vvbee

User metadata
Rank Oldbie

The setting not sticking could be a Windows or OBS issue, or something that was fixed in the years since then. Offscreen OpenGL capture with my current private Linux version works with SimpleScreenRecorder, so there's a decent chance it works with the last public Linux version. It only pauses when the window is minimized. Never tested with OBS.