VOGONS


Patch for OpenGL fullscreen bug


Reply 20 of 39, by VileR

Rank: l33t
dosmax wrote:

BTW 1: the scanline effect looks a bit too strong with this shader for my taste. At least I can't remember any real screen that showed completely black space between scanlines. If visible at all, it should be a more subtle effect. But that will most likely depend strongly on the age and/or type of the simulated monitor.

CGA and EGA monitors actually did do that - scanlines are very visible in e.g. 320x200 modes on these monitors. Maybe the space between scanlines isn't completely black, but that's due to photon scattering (or to age-related loss of beam focus)... the actual scanline effect w/ the shader still looks really similar.

VGA and later did something else; in e.g. 320x200, the scanlines were doubled vertically, so what was sent to the monitor would be 400 vertical scanlines. If you look closely at a CRT you can see it. You get much less space between scanlines, and they don't look as thick and strong as with this shader... so yeah, for VGA games this isn't 100% the authentic effect I guess. Still I don't mind it 😁

And yeah, it would be great to have other genuine CRT effects, like the shadowmask (RGB dots), photon scattering (glow), flicker, and phosphor persistence. Not sure if it can be done within current technical limitations though.

[ WEB ] - [ BLOG ] - [ TUBE ] - [ CODE ]

Reply 22 of 39, by VileR

Rank: l33t

I tried to, but it looks a bit wonky and moiré-patterned on my 1280x1024 monitor... I don't think there's a solution other than "buy more pixels" 🤣

[ WEB ] - [ BLOG ] - [ TUBE ] - [ CODE ]

Reply 24 of 39, by NY00123

Rank: Member

Well, it has been a while since the last post. So what can be added for now...

- First, let's get back to the original topic: the OpenGL borders bug. So far, no actual patch (code-wise) has been posted, except for the first one removing a column and a row. So, I have attached a fix, which applies one of the approaches described by FrodeSFS earlier. Basically, it allocates more RAM for the video output than is currently done, fills it with zeros, and uses it for an initial texture upload (a rough sketch of the idea follows after this list). The patch should be compatible with r3793 and has been tested on Windows and Linux. However, the buggy borders don't always appear here even *without* the patch, so there is a minor level of uncertainty.
- As for the shaders, the idea itself is truly a nice one, especially when combined with taking advantage of GPU power. I guess the OpenGL CRT Shader could work for multi-platform support if there were a patch for GL shaders; I'm not aware of one at the moment, though.
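
For anyone curious, the core of the zero-fill idea looks roughly like this - a minimal sketch only, not the attached patch, assuming the usual GL headers plus <cstring>, and with texWidth, texHeight and texture as placeholder names:

// Sketch: after (re)creating the DOSBox OpenGL texture, upload an all-zero
// buffer once, so texels outside the emulated screen start out black instead
// of whatever garbage was left in that memory.
unsigned char* blank = new unsigned char[texWidth * texHeight * 4]; // RGBA
memset(blank, 0, texWidth * texHeight * 4);
glBindTexture(GL_TEXTURE_2D, texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texWidth, texHeight, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, blank);
delete[] blank;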

Reply 25 of 39, by SquallStrife

Rank: l33t
VileRancour wrote:
dosmax wrote:

BTW 1: the scanline effect looks a bit too strong with this shader for my taste. At least I can't remember any real screen that showed completely black space between scanlines. If visible at all, it should be a more subtle effect. But that will most likely depend strongly on the age and/or type of the simulated monitor.

CGA and EGA monitors actually did do that - scanlines are very visible in e.g. 320x200 modes on these monitors. Maybe the space between scanlines isn't completely black, but that's due to photon scattering (or to age-related loss of beam focus)... the actual scanline effect w/ the shader still looks really similar.

They sure do.

v0HV3l.jpg

Delicious scanlines.

Looking forward to having a bash at this DOSBox shader, it looks rad!

Edit: They're actually a bit blown out in this photo. In real life they look very dark and defined.

But needless to say, there's nothing like scanlines for retro gaming.

(Click for full size)
MlroZl.jpg

VogonsDrivers.com | Link | News Thread

Reply 26 of 39, by VileR

Rank: l33t
SquallStrife wrote:

But needless to say, there's nothing like scanlines for retro gaming.

Amen to that... maybe in a few years (and with a generous supply of GPU cores) we'll be able to pull off something like this in realtime? :

INUg3.png

[ WEB ] - [ BLOG ] - [ TUBE ] - [ CODE ]

Reply 27 of 39, by Targaff

Rank: Member

I use (and love) the CRT.D3D shader, but I don't s'pose anyone with the requisite know-how (tragically not me) would be interested in porting the dot'n'bloom shader? I freely admit that I have no pressing need for it; I just like the slightly different look it offers.

Intel CC820 | PIII 667 | 2x128MB SDRAM | 3Dfx Voodoo 5 5500 @ Dell P790 | Creative SB PCI128 | Fujitsu MPC3064AT 6GB + QUANTUM FIREBALLlct10 10 GB | SAMSUNG DVD-ROM SD-608 | IOMEGA ZIP 100 | Realtek RTL8139C | Agere Win Modem

Reply 29 of 39, by NY00123

Rank: Member

Hi all,

Back to the original topic again (about the OpenGL border bug), I have a guess for the cause of it. Let me quote something from http://sdl.beuc.net/sdl.wiki/SDL_SetVideoMode:

User note 2: Also note that, in Windows, setting the video mode resets the current OpenGL context. You must execute again the OpenGL initialization code (set the clear color or the shade model, or reload textures, for example) after calling SDL_SetVideoMode. In Linux, however, it works fine, and the initialization code only needs to be executed after the first call to SDL_SetVideoMode (although there is no harm in executing the initialization code after each call to SDL_SetVideoMode, for example for a multiplatform application).

This may explain why I don't see the border bug in vanilla DOSBox when I run it on Windows, while it does appear on Linux on exactly the same PC.

As a workaround, I have attempted a simple modification of the SetSize code in sdlmain.cpp. Basically, it calls SDL_SetVideoMode with no GL flag set and then again with the flag, so that an OpenGL context reset is forced.
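
In (simplified) code, the workaround amounts to something like the following; variable names are illustrative, not the exact ones used in sdlmain.cpp:

// Sketch of the double-call workaround: first request a non-GL surface so SDL
// tears down the current OpenGL context, then request the real OpenGL mode,
// forcing a fresh context (and re-initialization) on every mode change.
SDL_SetVideoMode(width, height, 0, sdl_flags & ~SDL_OPENGL);
sdl.surface = SDL_SetVideoMode(width, height, 0, sdl_flags | SDL_OPENGL);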

Unfortunately, this means you can see the desktop for a short moment while the emulated video mode changes. At least this is the case here...

A modification of the SDL code or a manual OpenGL context reset may be required. Maybe someone can come up with a better solution, as I'm not *that* familiar with OpenGL...

Reply 30 of 39, by TeaRex

Rank: Member
FrodeSFS wrote:

What I have done in my own DOSBox build is this:
- when dosbox changes resolution, I create a "bitmap" in memory (well, just a char buffer), initialize it to all zeroes (with memset), and upload it to the dosbox opengl texture using glTexImage2D - causing the entire texture (including the parts outside the DOS screen) to be set to black, overwriting the garbage that would otherwise be stored there. This effectively removes the display artifacts you are seeing.

Hello FrodeSFS, in case you're still around, would you care to share that bit of code?

tearex

Reply 31 of 39, by NY00123

Rank: Member

TeaRex wrote:

Hello FrodeSFS, in case you're still around, would you care to share that bit of code?

While we may not see the code itself, it isn't hard to reproduce it.
However, I have just realized in the last week that there may be a somewhat better solution (although a bit hackish as well):
1. Clear the viewport using glClear (already done in vanilla DOSBox),
2. then copy a portion of the viewport's contents to the texture using glCopyTexSubImage2D.

In theory, a GPU should be capable of doing such a copy on its own, without an expensive transfer of data from CPU memory to GPU memory.

Newer versions of the OpenGL API may let one do this in a less hackish manner, but the approach given here requires nothing more than the additional use of an OpenGL 1.1 function.
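
Put together, the two steps could look something like this (illustrative only; texture, texWidth and texHeight are placeholder names, and the copy assumes the viewport is at least as large as the texture):

// Sketch of the glClear + glCopyTexSubImage2D idea: clear the viewport to
// black, then copy a texWidth x texHeight region of the now-black framebuffer
// into the texture, wiping any stale border texels entirely on the GPU.
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
glBindTexture(GL_TEXTURE_2D, texture);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, texWidth, texHeight);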

Reply 32 of 39, by dugan

Rank: Newbie

As of the current SVN, I actually think the problem might be in the code that uses pixel buffer objects to upload pixel data to the texture before drawing a new frame. In my tests, the corruption went away after I disabled them and used only the code paths that assume they aren't available.

Reply 33 of 39, by x86++

Rank: Newbie
--- sdlmain-ORIG.cpp	2016-02-20 18:14:58 -0500
+++ sdlmain.cpp	2016-02-20 18:13:05 -0500
@@ -1499,6 +1499,7 @@ static void GUI_StartUp(Section * sec) {
 sdl.opengl.pixel_buffer_object=(strstr(gl_ext,"GL_ARB_pixel_buffer_object") >0 ) &&
 glGenBuffersARB && glBindBufferARB && glDeleteBuffersARB && glBufferDataARB &&
 glMapBufferARB && glUnmapBufferARB;
+ sdl.opengl.pixel_buffer_object=false;
 } else {
 sdl.opengl.packed_pixel=sdl.opengl.paletted_texture=false;
 }

Reply 34 of 39, by dugan

Rank: Newbie

I think it's also worth mentioning that I've encountered this "sides of the screen are corrupted" bug on my friend's Hackintosh, which had a GTX 760, and on my Slackware box, which has a GTX 970. Both of these computers were built at least partly for games, and therefore use the binary NVIDIA drivers.

I did not ever encounter it on my 2015 Macbook Pro, which of course has a recent Intel GPU. Not in Linux (which I run on it) and not in OS X.

Therefore, I think this may be specific to NVIDIA's OpenGL implementation.

And if any of you still want to use bsnes's CRT and dot-n-bloom shaders, I'll be adding them to my DOSBox fork (which also fixes this problem) soon.

Reply 35 of 39, by tauro

Rank: Member

This one does it for me. I took x86++'s patch and just adapted it to the current SVN revision (r4000).

--- a/src/gui/sdlmain.cpp	2016-10-02 18:10:03.000000000 -0300
+++ b/src/gui/sdlmain.cpp	2016-11-08 03:59:34.249902561 -0300
@@ -1318,6 +1318,7 @@
 sdl.opengl.pixel_buffer_object=(strstr(gl_ext,"GL_ARB_pixel_buffer_object") >0 ) &&
 glGenBuffersARB && glBindBufferARB && glDeleteBuffersARB && glBufferDataARB &&
 glMapBufferARB && glUnmapBufferARB;
+ sdl.opengl.pixel_buffer_object=false;
 } else {
 sdl.opengl.packed_pixel=sdl.opengl.paletted_texture=false;
 }

I'm not a programmer, and I don't quite understand what this particular line of code does. Could somebody please elaborate a little bit on it?

Reply 36 of 39, by gulikoza

Rank: Oldbie

It's an optimization. Dosbox basically draws the image pixel-by-pixel to a texture. Without pixel_buffer_object, the texture is created in system memory and then uploaded to the graphics card. This means each frame is basically processed twice (not really, but for easier understanding...) - dosbox has to wait for the upload to the graphics card to complete (because each OpenGL call has to return with a known state) even though it has already finished drawing, so it's doing nothing in between... With pixel_buffer_object we tell the graphics card driver what we intend to do with the texture, so it can upload it in the background without dosbox having to wait for it.
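
For illustration, a stripped-down "unpack" upload through a PBO might look like this - a sketch based on the GL_ARB_pixel_buffer_object extension, not the actual dosbox code; pbo, texture, frame, width and height are assumed to already exist:

// Sketch: stream the frame to the texture through a pixel buffer object so
// the driver can perform the actual transfer in the background.
glBindBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, pbo);
// Re-specify the buffer storage ("orphaning") so we get fresh memory to write
// into while the previous frame may still be on its way to the card.
glBufferDataARB(GL_PIXEL_UNPACK_BUFFER_ARB, width * height * 4, NULL, GL_STREAM_DRAW_ARB);
void* dst = glMapBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, GL_WRITE_ONLY_ARB);
if (dst) {
    memcpy(dst, frame, width * height * 4); // CPU writes the new frame into driver-owned memory
    glUnmapBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB);
}
glBindTexture(GL_TEXTURE_2D, texture);
// With a PBO bound, the last argument is an offset into the buffer rather
// than a CPU pointer, so this call can return before the copy has finished.
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_BGRA, GL_UNSIGNED_BYTE, 0);
glBindBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, 0);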

Since this is a retro forum, there might be some history with it as well. AGP cards had a feature known as AGP Fast Writes. It basically allows the CPU to write directly to graphics memory without writing to system RAM first. This provided little gain with games at the time (since most textures are on disk, get uploaded to the graphics card on level load, and stay in graphics card RAM) and it was buggy with some boards... but it's exactly what dosbox does - it writes pixels from the CPU to the texture. NVidia had a proprietary extension that allowed this feature to be used with OpenGL, and this later evolved into GL_ARB_pixel_buffer_object. ATi (AMD) never really supported it with OpenGL, and maybe it's still not properly supported.

I haven't really used OpenGL output for a while (and most of these extensions were Windows only...), but it used to be that OpenGL really used a full CPU core just for texture uploads and was basically useless without this extension. This was one of the reasons I started working on D3D output for dosbox.

Also, this might have changed with PCIe cards, since I don't think PCIe supports something like AGP Fast Writes, but pixel_buffer_object should still provide some benefit. There's some reference here: http://www.songho.ca/opengl/gl_pbo.html along with test programs; pboUnpack is what dosbox uses.

http://www.si-gamer.net/gulikoza

Reply 37 of 39, by tauro

Rank: Member

Thank you very much for your explanation, gulikoza.

From what I can gather, it should be a lot slower with PBO disabled, shouldn't it?

I haven't tried it extensively but so far I notice no change in speed.

(By the way, thank you for your glide patch)

Reply 38 of 39, by gulikoza

Rank: Oldbie

Depends on the drivers really, but I imagine current drivers are much better optimized than they used to be, even without PBO.
In any case, using PBO shouldn't cause any corruption, and it shouldn't be slower even if it's not faster.

I assume there's some buffer that isn't cleared and that's the reason for the problem. Sure, turning off PBO might work, but finding what causes the problem with PBO would be even better 😁

http://www.si-gamer.net/gulikoza

Reply 39 of 39, by dugan

Rank: Newbie

Hmm... I second the thanks for the explanation, gulikoza. I was wondering why they were used at all.

Attached is a patch, made against r4000, that actually removes them from the code instead of just disabling them. This is the patch to apply for people who would prefer to deal with the border corruption issue by having DOSBox not use pixel buffer objects at all.