VOGONS


First post, by appiah4

Rank: l33t++

This has always confused me. Quake 3 has separate options for color depth and texture quality. Correct me if I'm wrong, but my understanding is that in order to achieve true 32-bit output you need to use both 32-bit textures and 32-bit rendering.

Now my first question is: what happens when you use 16-bit textures but 32-bit rendering? You get low-quality textures but high-quality lighting effects and smoother gradients, right?

And vice versa: what happens when you use 32-bit textures and 16-bit rendering? Presumably this produces an image with higher-quality textures, but all rendering and processing will be 16-bit, so the improvement over 16/16 should be marginal or nonexistent?

And finally, I believe the Voodoo 3 outputs a 16-bit image that approximates 22-bit through some driver post-process filter. What are the optimal settings for this? Should I stick to 16-bit color depth and texture quality? Considering the card is supposedly limited to 16-bit color depth and 256x256 texture size, why am I seeing considerable improvement when using 32-bit color depth and texture quality? It's definitely not a placebo effect, so I'm probably not understanding how these options work on the Voodoo 3.

Retronautics: A digital gallery of my retro computers, hardware and projects.

Reply 1 of 3, by uzurpator

Rank: Newbie

Can you even set 32-bit rendering on a V3 in Quake 3?

I'm not going to pretend that I am an OpenGL guru, as I am still learning, but...

All color arithmetic is done in normalized 0..1 float, however those values are encoded in textures as integers. For 32-bit textures it's 8 bits per color channel; for 16-bit textures it's either 4 or 5 bits per channel. I don't remember id Tech 3's 16-bit texture format, but let's assume it's 4 bits/channel. This has a profound effect on the final image quality, because the more bits per color, the more precise the arithmetic.
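To make that concrete, here's a minimal C sketch (my own illustration, not engine code) of what storing a normalized colour channel at 4 vs. 8 bits does to precision:

#include <stdio.h>

/* Quantize a normalized 0..1 colour channel to n bits and expand it
 * back, showing the precision loss described above. */
static float quantize(float c, int bits)
{
    int levels = (1 << bits) - 1;      /* 15 steps for 4-bit, 255 for 8-bit */
    int q = (int)(c * levels + 0.5f);  /* round to the nearest level */
    return (float)q / levels;
}

int main(void)
{
    /* 0.30 survives 8-bit storage almost intact, but snaps to the
     * nearest of only 16 levels at 4 bits. */
    printf("4-bit: %f\n", quantize(0.30f, 4));  /* 5/15   ~= 0.3333 */
    printf("8-bit: %f\n", quantize(0.30f, 8));  /* 77/255 ~= 0.3020 */
    return 0;
}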

The color operations pipeline looks like this:

source_textures -> shading_arithmetic -> result_image

The 16-bit texture setting means that colors from sources are input as 4-bit values, or 16 shades.
The 32-bit texture setting means that colors from sources are input as 8-bit values, or 256 shades.

16-bit rendering means the result image can have no more than 16 shades per color. This results in dithering and visible color boundaries.
32-bit rendering means the result image can have no more than 256 shades per color, so transitions are very smooth, practically invisible to the human eye.

It is still possible to end up with very fine output color values even when starting from 16-bit inputs, because the arithmetic in between is done in float.

The 16/16 setting means that coarse values are put through the graphics pipeline and output as coarse values. Dithering and visible tone steps abound.

The 16/32 setting means that coarse values are put through the graphics pipeline and output as fine values. Dithering does not happen and tone changes are less visible.

The 32/16 setting means that fine values are put in but coarse values come out. In practice this will look very similar to 16/16.

32/32 - obviously fine values in, fine values out, so the smoothest color transitions and no dithering. However, it will look very much like the 16/32 setting, as textures are just one part of the whole graphics pipeline, and due to texture filtering and mip-mapping, finer color values are going to be interpolated anyway.

In practice, the 16-bit texture setting is for conserving VRAM (Q3 can run on a GPU with 4 MB of VRAM), while the 16-bit rendering setting is for performance.
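As a toy model of the four combinations (again my own sketch; the function names are mine, not id Tech 3's): quantize the texel at texture precision, do the shading arithmetic in float, then quantize the result at framebuffer precision:

#include <stdio.h>

/* Quantize a 0..1 value to the given bit depth. */
static float quant(float c, int bits)
{
    int levels = (1 << bits) - 1;
    return (float)((int)(c * levels + 0.5f)) / levels;
}

/* source_textures -> shading_arithmetic -> result_image, with the
 * texture read at t_bits and the framebuffer written at f_bits. */
static float shade(float texel, float light, int t_bits, int f_bits)
{
    float in  = quant(texel, t_bits);  /* texture precision in       */
    float lit = in * light;            /* the arithmetic is float    */
    return quant(lit, f_bits);         /* framebuffer precision out  */
}

int main(void)
{
    /* One texel under one light level through all four combos. */
    printf("16/16: %f\n", shade(0.3f, 0.47f, 4, 4));  /* 0.133333 */
    printf("16/32: %f\n", shade(0.3f, 0.47f, 4, 8));  /* 0.156863 */
    printf("32/16: %f\n", shade(0.3f, 0.47f, 8, 4));  /* 0.133333 */
    printf("32/32: %f\n", shade(0.3f, 0.47f, 8, 8));  /* 0.141176 */
    return 0;
}

Note how 16/16 and 32/16 collapse to the same coarse output, while the two 32-bit-rendering combos stay distinct - exactly the pattern described above.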

Eternity is here and now.

Reply 2 of 3, by spiroyster

Rank: Oldbie

32-bit implies an alpha channel, which doesn't make sense for a framebuffer? Unless there is some wacky blending going on, like what a fragment shader would do, where you need information in that extra channel. But Q3 doesn't use shaders (GLSL ones anyway); rather, it uses its own set of blending rules for textures on primitives. I wonder why it's called 32-bit rendering o.0?

16-bit textures in '32-bit' rendering would indeed appear as degraded texture quality (since they hold data values only representable in 16-bit, their original format). 32-bit textures in 16-bit rendering is indeed a bit pointless, because the texels would be converted to 16-bit on the fly to be rendered into a 16-bit framebuffer. However, this downsampling may produce a slightly different result than a native 16-bit texture depending on the method used, but everything would still appear with 16-bit precision.
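For illustration, here are two plausible ways a driver might fold an 8-bit channel down to 5 bits on the fly (an assumption about the method, not actual driver code); the choice between them is where that "slightly different result" would come from:

#include <stdint.h>
#include <stdio.h>

/* Truncation vs. rounding when folding 8 bits down to 5; both give a
 * 16-bit-precision result, but they don't always agree. */
static uint8_t trunc5(uint8_t c) { return c >> 3; }

static uint8_t round5(uint8_t c)
{
    unsigned r = ((unsigned)c + 4) >> 3;  /* round to nearest 5-bit level */
    return r > 31 ? 31 : (uint8_t)r;
}

int main(void)
{
    printf("truncated: %u, rounded: %u\n", trunc5(205), round5(205));
    /* prints "truncated: 25, rounded: 26" */
    return 0;
}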

On a Voodoo 3, though, I doubt this would make much difference, since everything is downsampled to 16-bit (dithered). I.e. it doesn't matter what the initial colour depth is, it's going to be squeezed through a 16-bit framebuffer, so inevitably it's going to be 16-bit at some point. Maybe the dithering used when downsampling the 32-bit is better at retaining extreme contrast values (actually worse dithering if you were to view it raw, but better to filter when upsampling). Certain dithering methods would allow a box filter to produce a more accurate colour gradient than other types of dithering. Without knowing exactly what dithering method is used, it's impossible to say. You may get better results by changing the filtering the Voodoo 3 does; certain filtering may work better when upsampling 16-bit. Different dithering may work better too, but I don't think you can change that on the Voodoo 3; I don't have mine installed, so I couldn't say.

That Voodoo 3 box filtering is done to up the precision of what's held in the framebuffer for display. Dithered images can very easily be sampled to produce a much smoother gradient (at the end of the day, it's the 'shaded' part which benefits most from higher-precision textures, since this is where colour gradients tend to form, ergo the parts that tend to show banding when precision is limited). It looks better than 16-bit, but isn't exactly 24-bit. There will be extreme colour contrasts that I don't think the filtering could reproduce from 16-bit the way true 24-bit would... but it's good enough and probably not noticeable without proper image analysis imo...
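Here's a toy model of the dither-then-filter idea (this is NOT 3dfx's actual filter, which isn't documented here; it only illustrates why box-filtering a dithered image recovers precision): store a shade at 4 bits with a 2x2 ordered dither, then average the 2x2 block back out:

#include <stdio.h>

/* Store value c at 4-bit precision with a 2x2 ordered (Bayer) dither:
 * the per-pixel bias decides which neighbouring level each pixel gets. */
static int dither4(float c, int x, int y)
{
    static const float bias[2][2] = { { 0.00f, 0.50f },
                                      { 0.75f, 0.25f } };
    int q = (int)(c * 15.0f + bias[y & 1][x & 1]);
    return q > 15 ? 15 : q;
}

int main(void)
{
    float c = 0.30f;   /* a shade 4 bits cannot represent exactly */
    float sum = 0.0f;
    for (int y = 0; y < 2; y++)
        for (int x = 0; x < 2; x++)
            sum += dither4(c, x, y) / 15.0f;
    /* The raw pixels alternate between 4/15 and 5/15; the 2x2 box
     * filter averages them back to 0.30 exactly in this case. */
    printf("reconstructed: %f\n", sum / 4.0f);
    return 0;
}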

uzurpator wrote:

All color arithmetic is done in normalized 0..1 float

These days, yes. Shaders use float values for colour; however, traditionally this wasn't always the case, and the driver would convert your texture (be it GL_UNSIGNED_BYTE/GL_FLOAT etc.) to best fit the hardware (this usually involves padding, or in some cases full-on compression going on in the background, unknown to the user). The Voodoo 3's method of downsampling for the framebuffer and then upsampling could be considered a form of compression dictated by the hardware, unknown to the client.

Reply 3 of 3, by leileilol

Rank: l33t++

It'll be forced to 16-bit on the Voodoo 3 no matter what you do or what the settings look like. The Voodoo4/5 is when 32-bit first comes into play for 3dfx.

spiroyster wrote:

Different dithering may work better too, but I don't think you can change that on the Voodoo 3; I don't have mine installed, so I couldn't say.

Only the dithering for blended stuff can be changed, from smooth (a double-sized 2x2 pattern) to sharp (a 2x2 pattern).

uzurpator wrote:

The 16-bit texture setting means that colors from sources are input as 4-bit values, or 16 shades.

RGBA4444 for transparent stuff, RGB565 for everything else.
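As packing helpers, those two layouts look something like this (the bit order here is an assumption for illustration; actual Glide texture layouts may differ):

#include <stdint.h>
#include <stdio.h>

/* RGB565: 5 bits red, 6 green, 5 blue -- no alpha. */
static uint16_t pack_rgb565(uint8_t r, uint8_t g, uint8_t b)
{
    return (uint16_t)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
}

/* RGBA4444: 4 bits per channel, trading colour precision for alpha. */
static uint16_t pack_rgba4444(uint8_t r, uint8_t g, uint8_t b, uint8_t a)
{
    return (uint16_t)(((r >> 4) << 12) | ((g >> 4) << 8) |
                      ((b >> 4) << 4)  |  (a >> 4));
}

int main(void)
{
    printf("%04x %04x\n", pack_rgb565(255, 128, 0),
                          pack_rgba4444(255, 128, 0, 255)); /* fc00 f80f */
    return 0;
}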

long live PCem