It can wait months. ICR2 and CGL are higher priority.
As for my bug reports, maybe it's a good idea to have something like GitHub, so reports aren't drowned out like on the forum and stay findable for future reading/fixing?
- vQuake 2 particles. Perspective correction for particles is disabled for vHexen 2 and vQuake 1; these two don't appear to need it in either vanilla or antialiased mode.
- The keyboard code for ICR2 should be perfect now.
Last edited by sharangad on 2025-08-28, 11:36. Edited 1 time in total.
RaVeN-05 wrote on 2025-08-28, 11:09:
Popular games and exclusive games (ones tied to their API / video card) are higher priority, I think, and that seems logical, like GLRage supporting its three exclusive 3D Rage games. Or tied APIs in priority (by "tied" I mean where a particular game has many API renderers but none of them has a wrapper or an emulator).
Games that look best under a given API could also count as priority.
For example, MechWarrior 2 exists on many API renderers, but only the SGL (PowerVR) version looks best, since it has more graphical effects than the rest of its APIs.
Hexen 2 and the majority of DOS games are better on RReady.
Especially SODA, Rebmoon and ICR2, flawless now.
P.S. DOS is even better, I think, because DOSBox provides more stable execution than old Win32 apps on new Windows OSes.
You're right. It's just that I don't like sitting on fixes. Most of the fixes are for games that aren't very popular so I guess it can wait.
I'll try to file all the bugs I found as SourceForge tickets, so they're easy to find and don't get lost.
Streamed: Tomb Raider
Found bugs:
- After quitting the game, DOSBox is left with a garbage font.
- I once lost control of the character for several seconds, as if an input buffer were slowly feeding commands to the game, so I couldn't control the character properly. Maybe it's a shared DOSBox keyboard-handling problem? https://www.twitch.tv/videos/2551851697?t=00h29m44s
- Frame generation should be turned off for this game. It can boost 30 fps to 60 fps, but there are HOM (hall-of-mirrors) defects: the shadow under Lara is HOM, and menu backgrounds are not cleared.
Usability of the RReady UI:
- There's a chance you can quickly press the "Launch" button two or more times, which runs more than one DOSBox instance; later they even report errors that they can't copy files.
- Possible fix: make the "Launch" button self-lock, greyed out for a second. Also, it shouldn't be necessary to copy all the DOSBox files to another location; I guess it could run out of the RReady folder directly?
These are also duplicated as SourceForge tickets, so now they won't be missed.
Again, lowest-ever priority for these.
RaVeN-05 wrote on 2025-08-28, 13:15:
The console corruption occurs on real hardware as well with both v2k and v1k.
https://nirvtek.com/downloads/RReady.Alpha.20250829.002.7z
MD5: 4ebff76757f2934e50eee77d99224871
1) Disable the Launch button for a few seconds.
2) Compare the files being copied to see if they're the same; if they are, the file isn't copied. The app will take slightly longer to launch since each and every file is compared.
3) Should hopefully correct the input lag seen in test builds with the new keyboard code.
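The self-locking Launch button boils down to a timestamp check. A minimal sketch (my own illustration with a hypothetical `launch_allowed` helper, not RReady's actual code):

```c
#include <time.h>

/* Hypothetical self-locking Launch button: the first press is
 * accepted and starts a lock window; presses inside the window
 * are ignored. */
static time_t g_last_launch = 0;

int launch_allowed(time_t now, int lock_seconds)
{
    if (g_last_launch != 0 && now - g_last_launch < lock_seconds)
        return 0;            /* still locked: ignore the press */
    g_last_launch = now;     /* accept and restart the window */
    return 1;
}
```

The UI would call `launch_allowed(time(NULL), lock_seconds)` on every press and keep the button greyed out while it returns 0.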
1) Disable the Launch button for a few seconds after launch. This release contains all the fixes of the previous builds and corrects glitchiness with disabling the Launch button when pressing [ENTER] or using the arrow keys to navigate the apps list.
2) When launching, compare the files being copied to see if they're the same; if they are, the file isn't copied. The app will take slightly longer to launch since each and every file is compared, but SSD wear should be minimal.
3) This release should hopefully correct the input lag seen in test builds.
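The compare-before-copy check can be sketched as a chunked byte comparison (my own illustration with a hypothetical `files_identical` helper, not RReady's actual code):

```c
#include <stdio.h>
#include <string.h>

/* Returns 1 if both files exist and have identical contents,
 * 0 otherwise. Fixed-size chunked reads keep memory use flat
 * regardless of file size. */
int files_identical(const char *path_a, const char *path_b)
{
    FILE *fa = fopen(path_a, "rb");
    FILE *fb = fopen(path_b, "rb");
    int same = (fa != NULL && fb != NULL);

    while (same) {
        unsigned char buf_a[4096], buf_b[4096];
        size_t na = fread(buf_a, 1, sizeof buf_a, fa);
        size_t nb = fread(buf_b, 1, sizeof buf_b, fb);
        if (na != nb || memcmp(buf_a, buf_b, na) != 0)
            same = 0;        /* lengths or bytes differ */
        else if (na == 0)
            break;           /* both hit EOF together: identical */
    }
    if (fa) fclose(fa);
    if (fb) fclose(fb);
    return same;
}
```

Only when the check fails would the copy be performed, trading a full read of both files for the skipped write, which is the SSD-wear trade-off mentioned above.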
Whiplash:
- The menu blinks.
- The selected option has a black outline in the main menu.
- Inconsistent fps in gameplay, varying from 30 to 70.
- From time to time there's a single frame where most polygons are missing or painted the wrong colors.
- In gameplay (at the raceway) the rightmost and bottommost edges of the screen have garbage lines.
RaVeN-05 wrote on 2025-08-31, 09:28:
Does anyone know if the Radeon R7 240 has a problem with geometry shaders? I tried all the vQuake engine games and the QSpan rendering is horribly broken. In SODA Offroad the line renderer, which also uses a geometry shader, works fine.
This suggests to me that my geometry shaders are somehow non-compliant, but they do work on nvidia and Intel HD.
Thanks for that, but the geometry shader in that post has an obvious bug. Mine may be bugged too, but I can't figure out where.
In that shader there's an assignment like a = (b += 10). It should be a = (b + 10); for += to make sense, the line would have to start with b +=.
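To make the difference concrete, here are the two forms side by side in C (GLSL behaves the same way for this operator). Both compile, but only one leaves b alone; the hypothetical helper names are mine:

```c
/* a = (b += 10): the compound assignment writes b first, then a
 * receives the new value of b. */
int b_after_compound(int b)
{
    int a = (b += 10);
    (void)a;                 /* a == b == original b + 10 */
    return b;
}

/* a = (b + 10): b is only read; a receives b + 10. */
int b_after_plain(int b)
{
    int a = (b + 10);
    (void)a;                 /* a == b + 10, b unchanged */
    return b;
}
```

The side effect on b is what makes the first form an almost certain bug inside a one-off expression.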
Try ChatGPT here.
Also commented on the recent Patreon message.
In short:
AMD vs nVidia, it's always like that: nVidia tries to be as backward compatible as possible and keeps legacy OpenGL.
AMD is more aggressive here, dropping a lot in favor of speed.
That's simply my experience over time. This is why I prefer nVidia now; I was an AMD user in the past, and I still have old Radeons, way old, like the 9x00 series.
I asked ChatGPT and got very detailed explanations; I can shorten them to one answer: for AMD, use the 330 core profile.
Also from ChatGPT: AMD can fail a GS at the compile stage; you need some kind of debugger, or a way for OpenGL to write the compile errors to the console.
I recall watching TheCherno's OpenGL tutorials; he showed how compile errors can be printed to the console. Well, I'm weak here; I can only say what I've seen and experienced.
Thanks Raven. I started out with 330 compatibility. I'll try 330 core. The problem is that 330 core doesn't run on Intel HD 2000, which, while quite slow, is the minimum spec.
I'll try printing out the logs. There are no errors; if there were, RReady would stop and open the log file. There may be warnings.
AMD probably needs a separate approach, like branching to different GLSL code when AMD is detected.
No warnings with AMD either. It looks like the matrix (modelview-projection) is the problem. It renders the spans all over the place, just not in the right place.
The line shader, which works just like the QSpan shader, works fine. It's only the QSpan shader that doesn't.
ChatGPT found five bugs in the shader. The first three, unused parameters pos and l, and using the reserved word length for a parameter, have been fixed. The other two "bugs" are necessary for Rendition to work and shouldn't be fixed: changing the pixel format from .gbar to bgra (this is related to the internal format of Rendition textures), and the divide by z (perspective correction) in the fragment shader. The latter will never cause a divide by zero, and the floor it suggests (1e-6) would break the shader; the game will break before that z ever becomes 0.
Even with the two actual bugs fixed, it still doesn't work on the R7 240 🙁.
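For context on that divide by z: perspective-correct interpolation carries a/z and 1/z across the span and divides per fragment, so flooring the denominator at 1e-6 would skew every recovered value rather than merely guard against zero. A plain-C sketch of the technique (illustrative only, not the shader's actual code):

```c
/* Perspective-correct recovery of an attribute at screen-space
 * position t in [0, 1]: a/z and 1/z interpolate linearly in
 * screen space; the true attribute value is their quotient. */
float persp_interp(float a0, float z0, float a1, float z1, float t)
{
    float a_over_z   = (1.0f - t) * (a0 / z0) + t * (a1 / z1);
    float one_over_z = (1.0f - t) * (1.0f / z0) + t * (1.0f / z1);
    return a_over_z / one_over_z;   /* the divide the fragment stage performs */
}
```

At t = 0 and t = 1 this returns a0 and a1 exactly; in between, the result is pulled toward the nearer (smaller z) endpoint, which is precisely what a plain linear lerp would get wrong.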
Shaders, as I heard from TheCherno, are sensitive; they can throw no warnings or errors and still just not work.
There might be some other commands or state changes that must be invoked.
Something like: select attrib, select program, activate program, use program.
I don't know exactly.
nVidia can be more tolerant of missing OpenGL commands, while for AMD everything needs to be specified exactly.
// Use a neutral VAO for client arrays, or VAO 0 in compat
glBindVertexArray(0);

// No VBOs when setting client pointers
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
Didn't work.
[edit]
One of the vertex attributes, a float, works just fine. The vec4 does not. I tried splitting it into 4 floats, but that worked on nvidia and not on the Radeon. I *think* the Radeon is doing some sort of scaling when passing an int to a float parameter. I even tried passing it as an ivec4 (4-component integer vector), but once again it only works on nvidia. The parameter that works is a signed short (2 bytes), not a signed int (4 bytes).
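One possible culprit (an assumption on my part, not something confirmed for RReady): with glVertexAttribPointer, an integer source type is converted to float on the way to the shader, and if the normalized flag is set, a signed int is rescaled into [-1, 1]; glVertexAttribIPointer is the variant that delivers integers untouched to int/ivec4 inputs. The rescaling can be mimicked in plain C:

```c
#include <limits.h>

/* Mimics what a normalized GL_INT attribute looks like by the time
 * the shader sees it: rescaled into [-1, 1]. (Illustration only --
 * the real conversion happens in the driver.) */
float as_normalized_int(int v)
{
    float f = (float)v / (float)INT_MAX;
    return f < -1.0f ? -1.0f : f;
}

/* What an unnormalized pointer (or glVertexAttribIPointer feeding an
 * integer shader input) would deliver: the plain value. */
float as_plain_int(int v)
{
    return (float)v;
}
```

If one driver treats the attribute as normalized, or the shader input type doesn't match the pointer call (which is undefined behavior, and nvidia is known to be lenient about it), values like 1000 collapse to roughly 5e-7, which would match the "some sort of scaling" symptom.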