VOGONS


How is 16-bit dithering controlled?


Reply 20 of 22, by spiroyster

Rank: Oldbie
The Serpent Rider wrote:

If it was there, then wasn't, then was again

Once again - you can't get dithering on Fermi, Kepler and Maxwell GPUs with fresh drivers. Period. So it seems like anything after G80 and before the Pascal series just can't render 16-bit modes.
Also, you still get dithering on any card before G80 with drivers that support it.

Doesn't surprise me o.0

spiroyster wrote:

I can't imagine the rationale behind re-implementing this feature in these kinds of modern architectures.

They can render 16-bit modes just fine. It's not the input (the 16-bit colours specified by the program) that is important, it's the display device colour depth/buffer size (gfx card) and the display's capabilities that matter, and those will probably be fixed precision (but not fixed at 16-bit). Older cards with 16-bit buffers displaying on CRT screens (which can display many more colours) had to do something to mitigate the inevitable banding, so they implemented hardware dithering (which, while related to 16-bit, is not a by-product of rendering 16-bit).

A developer could still turn off dithering and perform the dithering on the buffer via the CPU as they desired, but why bother if there is accelerated hardware dithering available to you... or the hardware ignores your request to turn off dithering (not so much conforming to the standards back then)... or decide that banding is the way (cel shading used it to good effect, through fragment shaders though o.0), turn off dithering and degrade everything to the 16-bit gamut, maybe even 8-bit posterised! Textures themselves can also be dithered in advance by an artist, as well as dithering happening as part of the graphics pipe. Personally, it's not something I have had to worry about other than... "ah bollocks, I can't seem to create a 24-bit context, back to the white book (would be the red book these days)."

Different types of dithering:
http://www.tannerhelland.com/4660/dithering-e … ms-source-code/

Agree with you though, it would appear they have removed 'default' dithering from later cards/drivers. Yes, this is a problem when playing old games, since the default position of the display driver is now to not dither an output that the original developers perhaps anticipated would be dithered without them doing anything themselves. At the same time, given the 'general processing' capabilities, there would be little reason to implement dedicated dithering in the hardware; rather, use the massive capabilities of the GPU to do it (some dithering techniques are embarrassingly parallel, since all that is needed is the fragment colour and the x,y location in the frame).
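To make the 'embarrassingly parallel' point concrete, here is a minimal C++ sketch of an ordered (Bayer) dither applied before truncating an 8-bit-per-channel colour down to 5:6:5. It is purely illustrative (the matrix, function names and the choice of a 4x4 Bayer pattern are mine, not anything a particular card or driver implements); note that each pixel needs only its own colour and (x, y) position.

```cpp
#include <cstdint>

// 4x4 ordered-dither (Bayer) threshold matrix, values 0..15.
static const int kBayer4x4[4][4] = {
    { 0,  8,  2, 10},
    {12,  4, 14,  6},
    { 3, 11,  1,  9},
    {15,  7, 13,  5}
};

// Dither one 8-bit channel down to 'bits' bits (5 or 6 for RGB565).
static uint8_t ditherChannel(uint8_t value, int bits, int x, int y)
{
    int step = 1 << (8 - bits);                            // quantisation step in 8-bit units
    int threshold = (kBayer4x4[y & 3][x & 3] * step) / 16; // position-dependent offset in [0, step)
    int dithered = value + threshold;
    if (dithered > 255) dithered = 255;
    return static_cast<uint8_t>(dithered >> (8 - bits));
}

// Pack an 8-bit RGB fragment into a dithered 16-bit 5:6:5 value.
// Needs only the fragment colour and its (x, y) position, so it maps
// trivially onto per-pixel parallel hardware.
uint16_t ditherToRGB565(uint8_t r, uint8_t g, uint8_t b, int x, int y)
{
    uint16_t r5 = ditherChannel(r, 5, x, y);
    uint16_t g6 = ditherChannel(g, 6, x, y);
    uint16_t b5 = ditherChannel(b, 5, x, y);
    return static_cast<uint16_t>((r5 << 11) | (g6 << 5) | b5);
}
```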

@swaaye

<pure speculation>
From the driver's point of view, the simplest thing to do is simply accept the colour values being pushed through the API as 16-bit colours, convert each channel to 8-bit (or the card's/GPU's native precision), and then simply rasterise with the full 24-bit gamut that would no doubt be supported (if not native) by the frame buffer. This would present a 24-bit frame; it's just that the colours, when first defined, were limited to the 16-bit gamut. However, given the now larger range of colours, textures would still be visibly lower quality, while geometry and lighting calculations would be higher precision, giving a rather strange result, and god knows how various AA then applied would look o.0. The win of using a 24-bit gamut in this case is lost since, while texture filtering may eradicate some banding, the lower 16-bit precision of the texture gamut would still be visible, and more noticeable when certain conditions are met.
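As a sketch of that first approach (the function name and the bit-replication choice are illustrative, not taken from any real driver), widening a 5:6:5 colour to 8 bits per channel before rendering into a 24/32-bit frame buffer could look like this:

```cpp
#include <cstdint>

// Widen a 16-bit 5:6:5 colour to 8 bits per channel. Replicating the top
// bits into the bottom maps 31 -> 255 (not 248), keeping white white.
void rgb565ToRGB888(uint16_t c, uint8_t& r, uint8_t& g, uint8_t& b)
{
    uint8_t r5 = (c >> 11) & 0x1F;
    uint8_t g6 = (c >> 5)  & 0x3F;
    uint8_t b5 =  c        & 0x1F;
    r = static_cast<uint8_t>((r5 << 3) | (r5 >> 2));
    g = static_cast<uint8_t>((g6 << 2) | (g6 >> 4));
    b = static_cast<uint8_t>((b5 << 3) | (b5 >> 2));
}
```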

The second simplest thing to do would be to simply interpolate and downgrade the entire frame's colour gamut to 16-bit depth. Values could come from a 32-bit gamut/range, but no colour would be present in the frame which cannot be represented by the 16-bit gamut. This would present the game as close as possible to the original... without dithering... which is what your screenshots seem to suggest 😀
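And a sketch of that second approach, again with made-up names: render at 8 bits per channel but snap every output channel to a value that 5:6:5 can actually represent (truncating the low bits), so the frame never contains a colour outside the 16-bit gamut and no dithering is applied.

```cpp
#include <cstdint>

// Truncate an 8-bit channel to 'bits' bits, then widen it back to 8 bits,
// so the value is one that a 5- or 6-bit channel could actually hold.
uint8_t posterise(uint8_t value, int bits)
{
    uint8_t q = static_cast<uint8_t>(value >> (8 - bits));
    return static_cast<uint8_t>((q << (8 - bits)) | (q >> (2 * bits - 8)));
}

// Clamp a full-precision output colour to the 5:6:5 gamut, no dithering.
void clampTo565Gamut(uint8_t& r, uint8_t& g, uint8_t& b)
{
    r = posterise(r, 5);
    g = posterise(g, 6);
    b = posterise(b, 5);
}
```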

In both cases, it is assumed that a 32-bit frame buffer WILL be used. This is now the case for all Windows 8.1/10 windowed contexts o.0. Can't confirm the behaviour with full-screen though, which may have more flexibility in choosing pixel formats.

Dithering is simple to implement, but the market for it on x86 workstations/desktops isn't huge (probably pretty much vogon-esque only; I can't think of another market for it on our systems, other than printing (which requires more flexibility, dithering the image data itself rather than relying on display dithering), and maybe some image analysis might benefit from its signal reduction???), so I can understand why it's not there anymore. And certainly, ReShade could do it, and it should look pretty similar (if not the same) using the same dithering algorithm... however, something to maybe note, as mentioned in the Image Quality of various old video cards (Quake 3 comparison) thread: VGA may present a softer image, which means high-resolution dithering will be a lot more effective on an old CRT over VGA than with DVI/flat-panel crispness.
</pure speculation>

All in all it does present some rather interesting project ideas, which I personally have no time to do. 😊

Can't speak for others, but personally, any time image creation/generation is involved I work in deep colour. This allows multiple avenues for export with a wide range of gamuts/formats, including stuff like HDR. While obviously texture memory footprint/colour depth should be considered, it's never presented me with any problems IRL on desktops. The biggest problem has always been working with !power2 textures and limited texture buffer sizes, and even that has been solved in both hardware and software for a while. I'm OpenGL though; DirectX may be different.

swaaye wrote:

Another interesting thing to think about is Android devices. Some Android games use 16-bit color depth because it helps slow GPUs. I have seen interesting differences between hardware. For example, Intel's Atom Baytrail GPU seems to dither 16-bit color depth. I also have Tegra 2, 4 and K1 devices and the K1 is quite banded with 16-bit color depth. Don't remember what Tegra 2 or 4 look like...

Agree; given the relative size of the screens and the resolutions involved, 16-bit banding is not so obvious, so you can get away with not worrying about it (unless you're a purist of course o.0), and dithering would be extremely beneficial. Plus, if you know you have a 16-bit limitation on the display output, there's no need for >16-bit textures, and now you can process/load/copy twice as much in the same time while consuming the same amount of power. Battery conservation is important 😀

Reply 21 of 22, by Scali

Rank: l33t
maximus wrote:

games, drivers, or hardware?

Yes.
In other words: it depends.
With older cards, there was a simple dithering algorithm hardwired into the ROPs. So whenever you did alphablending in 16-bit mode, some logic in the hardware would decide which pixels to write and which pixels to skip, based on the alpha level.
So you will see different results from different video cards, because they may use different patterns and algorithms for dithering.
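For illustration only (the real ROP logic and patterns are chip-specific, and the names here are made up), that kind of fixed-pattern decision amounts to comparing the fragment's alpha against a repeating per-pixel threshold, a 'screen-door' style test:

```cpp
#include <cstdint>

// Repeating 4x4 threshold pattern (a Bayer matrix scaled to 0..240).
static const uint8_t kAlphaPattern[4][4] = {
    {  0, 128,  32, 160},
    {192,  64, 224,  96},
    { 48, 176,  16, 144},
    {240, 112, 208,  80}
};

// Decide whether to write this fragment at all: the more opaque the fragment,
// the more positions in the repeating pattern it passes. No blend is performed,
// so the 'transparency' is really a spatial mix of written and skipped pixels.
bool ditheredAlphaTest(uint8_t alpha, int x, int y)
{
    return alpha > kAlphaPattern[y & 3][x & 3];
}
```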

Some more advanced video chips would allow the driver to reprogram the dithering pattern, so even a driver update could change the results, or drivers may use different dithering approaches on a per-application basis.

And of course, it is up to the game to decide whether to use 15-bit or 16-bit surfaces in the first place, because when using 32-bit surfaces, hardware would generally perform 'real' alphablending anyway.
The reason dithering even exists is because 16-bit pixels are either 1:5:5:5 or 5:6:5 bit formats, and 5 or 6 bits is not enough precision to get acceptable alphablending, especially with multiple alpha layers.
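A tiny demo of that precision problem (a hypothetical calculation, not hardware-accurate): blend 50% white over a smooth 0..255 gradient at 8-bit and at 5-bit channel precision and count how many distinct output levels survive. The 5-bit path collapses the gradient into a handful of bands, which is what the hardware dithering was there to hide, and stacking more alpha layers only makes it worse.

```cpp
#include <cstdio>
#include <set>

// Blend 50% white over 'background' (0..255), with the framebuffer holding
// only 'bits' bits per channel. Result is returned in that reduced precision.
int blendHalfWhite(int background, int bits)
{
    int maxVal = (1 << bits) - 1;
    int bg = background * maxVal / 255;  // quantise the background to 'bits'
    return (bg + maxVal + 1) / 2;        // 50/50 blend with full white, rounded up
}

int main()
{
    std::set<int> levels5, levels8;
    for (int bg = 0; bg < 256; ++bg) {
        levels5.insert(blendHalfWhite(bg, 5));
        levels8.insert(blendHalfWhite(bg, 8));
    }
    // Prints: distinct levels: 5-bit = 16, 8-bit = 128
    std::printf("distinct levels: 5-bit = %zu, 8-bit = %zu\n",
                levels5.size(), levels8.size());
    return 0;
}
```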

Newer GPUs no longer supported 16-bit pixelformats at all, so the driver would silently upgrade things to 32-bit internally, and as a result, 'real' alphablending was performed anyway, even on '16-bit' games. No dithering hardware was present (PowerVR was very early with such an approach, because their internal tile-cache was always 32-bit only, and performing 32-bit alphablending was 'free').

Even newer GPUs would reintroduce a form of alpha-based dithering, via techniques such as alpha-to-coverage. So once again there was a form of dithering, except it was now a far more generic and programmable approach, driven directly by the shaders used by the game, and mainly used for multisample AA of alphablended surfaces.
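A rough sketch of the alpha-to-coverage idea (the exact mapping from alpha to a sample mask is hardware-specific; the function name and 4-sample mask table below are only illustrative): instead of blending, the fragment's alpha selects how many of the multisample positions get written.

```cpp
#include <cstdint>

// Map a fragment's alpha (0..255) to a 4-sample MSAA coverage mask: the more
// opaque the fragment, the more samples it is allowed to write. The surviving
// samples are then combined by the normal MSAA resolve, which is where the
// apparent "dithering" comes from.
uint32_t alphaToCoverage4x(uint8_t alpha)
{
    int covered = (alpha * 4 + 127) / 255;     // 0..4 samples, rounded
    static const uint32_t kMasks[5] = {
        0x0,   // no samples written
        0x8,   // 1 of 4 samples
        0xA,   // 2 of 4 samples
        0xE,   // 3 of 4 samples
        0xF    // all 4 samples
    };
    return kMasks[covered];
}
```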

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 22 of 22, by Scali

Rank: l33t
The Serpent Rider wrote:

So anything from 8800 to 980 Ti is out of luck. Same thing goes for any Radeon before Polaris series.

FP16 and FP32 are Floating Point pixel formats, and aren't related to dithering.
FP16 seems to have gotten a boost from the mobile sector, where you could cram more processing power into the same power envelope by using FP16. As a result, NV decided to make one GPU design that fits all markets, hence desktop and workstation GPUs now had FP16 again.
Both D3D and OpenGL have special datatypes for this (ironically enough the 'half' type for FP16 was originally introduced because of the GeForce FX, which had very poor FP performance, and FP16 could greatly boost performance compared to FP32. Its competition at the time, the Radeon 9500/9700 only had FP24. Later generations bumped it up to FP32, and dropped FP16 and FP24 entirely).

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/