VOGONS


First post, by Kahenraz

User metadata
Rank l33t

I understand that the last 8 bits are used for transparency. But how does this translate into a useful feature as a desktop color-depth setting? Is it inherited by windowed applications, and does it decide what kinds of surfaces are available?

Fun fact. I used to own some kind of Matrox Millennium years ago that could do all kinds of oddball bit depths on the desktop, like 15-bit (maybe others, I can't remember). I always wondered what the purpose of this was. Maybe it was useful with CAD applications.

I also noticed that 64k colors disappeared when Windows 95 came around. I always thought that this color depth was a great novelty in Windows 3.1. There were more than enough colors to use the closest approximation, without the burden of palette swapping in 256 color mode.

Reply 1 of 25, by The Serpent Rider

User metadata
Rank l33t++

15-bit is 5-bit per color without transparency. 16-bit includes transparency. Transparency is used for effects like window shadows.

Last edited by The Serpent Rider on 2022-06-13, 20:39. Edited 2 times in total.

I must be some kind of standard: the anonymous gangbanger of the 21st century.

Reply 2 of 25, by pan069

User metadata
Rank Oldbie

Pros and cons. 32-bit is faster to process since it aligns with memory locations whereas 24-bit does not. However, 24-bit takes up less memory (1 byte less per pixel) but more processing is required to transfer data.

Reply 3 of 25, by Tiido

User metadata
Rank l33t

4 bytes per pixel needs less CPU power to calculate pixel coordinates compared to 3 bytes per pixel. 24-bit only came to be because video memory was expensive and wasting 25% of it for performance's sake was not an option. 32-bit is still 24 bits but with 1 byte unused, to get 4 bytes per pixel and ease up rendering calculations (you no longer need to multiply or do other tricks, for example). The alpha channel aspect only matters for rendering storage and not the final framebuffer that the GFX card is showing you; game textures etc. could use that lone byte for something useful.
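As a rough illustration of the addressing arithmetic (a minimal sketch in plain C, assuming a simple linear framebuffer; names are made up):

    #include <stddef.h>
    #include <stdint.h>

    /* 32bpp: byte offset is (y*width + x) * 4 -- a single shift, and every
       pixel sits in a naturally aligned 4-byte word. */
    static uint32_t *pixel32(uint32_t *fb, int width, int x, int y)
    {
        return fb + ((size_t)y * width + x);   /* the *4 becomes a shift of the byte address */
    }

    /* 24bpp: byte offset is (y*width + x) * 3 -- a multiply (or shift+add),
       and the 3-byte pixel can straddle a 32-bit word boundary. */
    static uint8_t *pixel24(uint8_t *fb, int width, int x, int y)
    {
        return fb + ((size_t)y * width + x) * 3;
    }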

T-04YBSC, a new YMF71x based sound card & Official VOGONS thread about it
Newly made 4MB 60ns 30pin SIMMs ~
what are you reading? you won't understand it anyway 😜

Reply 4 of 25, by Azarien

User metadata
Rank Oldbie

The additional 8 bits may be used for transparency when blitting bitmaps/textures, but when it comes to displaying the framebuffer they're just discarded. So yes, 25% of framebuffer memory gets wasted. On the other hand, addressing 32-bit (4-byte) pixels is easier and faster than working with 24-bit (3-byte) pixels, which are awkward.
15 and 16-bit modes (both use 2 bytes per pixel; one bit is wasted in 15 bpp) and 24-bit mode were useful when video memory was scarce. For example, having just 1 MB of video RAM you could have 640x480 at 24bpp, or 800x600 at 16bpp, but not 800x600 at 24bpp.
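To put numbers on that (a quick sanity check in C, taking 1 MB as 1,048,576 bytes):

    #include <stdio.h>

    int main(void)
    {
        printf("640x480 @ 24bpp = %ld bytes\n", 640L * 480 * 3); /*   921,600 -> fits in 1 MB */
        printf("800x600 @ 16bpp = %ld bytes\n", 800L * 600 * 2); /*   960,000 -> fits in 1 MB */
        printf("800x600 @ 24bpp = %ld bytes\n", 800L * 600 * 3); /* 1,440,000 -> does not fit */
        return 0;
    }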

Later when framebuffer memory was no longer a problem, "32bpp everywhere" became the norm.

The Serpent Rider wrote on 2022-06-13, 20:35:

15-bit is 5-bit per color without transparency. 16-bit includes transparency. Transparency is used for effects like window shadows.

That may be true for textures, but in the framebuffer 15bpp (32k colors) means 5 bits for each of red, green and blue with 1 bit alpha discarded, while 16bpp (64k colors) means 5 bits for red, 6 bits for green, and 5 bits for blue.
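Written out as bit packing (sketch only; the macro names are made up, not from any API):

    #include <stdint.h>

    /* 15bpp / 32k colors: X1R5G5B5 -- the top bit is the discarded/alpha bit */
    #define PACK_555(r, g, b) \
        ((uint16_t)((((r) >> 3) << 10) | (((g) >> 3) << 5) | ((b) >> 3)))

    /* 16bpp / 64k colors: R5G6B5 -- the spare bit goes to green */
    #define PACK_565(r, g, b) \
        ((uint16_t)((((r) >> 3) << 11) | (((g) >> 2) << 5) | ((b) >> 3)))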

Reply 5 of 25, by Kahenraz

User metadata
Rank l33t

Thanks for all of these wonderful explanations. I see now that 32k and 64k colors didn't go away, they were just redefined as 15 and 16-bit. At least we didn't have the ambiguity of MacOS's "thousands" and "millions" of colors.

I always thought that 24-bit would be faster, as it had to manage less data in memory. But it makes a lot of sense why this was chosen, back when memory was so expensive. I also understand how 32-bit, despite using more colors, would actually be faster when stepping through it as aligned in memory.

Reply 6 of 25, by Tiido

User metadata
Rank l33t

Both 24 and 32-bit have the same 8 bits per color channel, for 24 bits total = 2^24 = 16,777,216 colors. 32-bit won't have more colors. (Nowadays there's also 30-bit color with 10-bit channels, where only 2 bits out of 32 get wasted, but monitor and video card support still seems quite sparse.)
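The 30-bit case packs the same way, just with wider channels (a sketch of one possible 10:10:10:2 layout; real formats differ in channel order):

    #include <stdint.h>

    static uint32_t pack_10_10_10_2(unsigned r10, unsigned g10, unsigned b10)
    {
        return ((uint32_t)(r10 & 0x3FF) << 20) |
               ((uint32_t)(g10 & 0x3FF) << 10) |
                (uint32_t)(b10 & 0x3FF);        /* the top 2 bits stay unused */
    }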

T-04YBSC, a new YMF71x based sound card & Official VOGONS thread about it
Newly made 4MB 60ns 30pin SIMMs ~
what are you reading? you won't understand it anyway 😜

Reply 7 of 25, by bakemono

User metadata
Rank Oldbie

When you have a 32-bit bus like VLB or PCI, you can always set a 32-bit pixel with one bus transaction. Setting a 24-bit pixel will often take two bus transactions because of the misalignment, which makes it slower.

again another retro game on itch: https://90soft90.itch.io/shmup-salad

Reply 8 of 25, by pan069

User metadata
Rank Oldbie
Azarien wrote on 2022-06-13, 20:40:
The Serpent Rider wrote on 2022-06-13, 20:35:

15-bit is 5-bit per color without transparency. 16-bit includes transparency. Transparency is used for effects like window shadows.

That may be true for textures, but in the framebuffer 15bpp (32k colors) means 5 bits for each of red, green and blue with 1 bit alpha discarded, while 16bpp (64k colors) means 5 bits for red, 6 bits for green, and 5 bits for blue.

And the reason the green component in 16 bit color is 6 bits and not the blue or red is because the human eye is more sensitive to green.
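The standard luma weights put a number on that sensitivity (Rec. 601 coefficients, shown only for illustration):

    /* Rec. 601 luma: green dominates perceived brightness, which is why it
       gets the extra bit in R5G6B5. */
    static double luma_601(double r, double g, double b)
    {
        return 0.299 * r + 0.587 * g + 0.114 * b;
    }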

Reply 9 of 25, by Disruptor

User metadata
Rank Oldbie
pan069 wrote on 2022-06-13, 23:24:

And the reason the green component in 16 bit color is 6 bits and not the blue or red is because the human eye is more sensitive to green.

That depends on the implementation. Some drivers allow different selections, even 6-6-4.

Reply 10 of 25, by Jo22

User metadata
Rank l33t++
pan069 wrote on 2022-06-13, 23:24:
Azarien wrote on 2022-06-13, 20:40:
The Serpent Rider wrote on 2022-06-13, 20:35:

15-bit is 5-bit per color without transparency. 16-bit includes transparency. Transparency is used for effects like window shadows.

That may be true for textures, but in the framebuffer 15bpp (32k colors) means 5 bits for each of red, green and blue with 1 bit alpha discarded, while 16bpp (64k colors) means 5 bits for red, 6 bits for green, and 5 bits for blue.

And the reason the green component in 16 bit color is 6 bits and not the blue or red is because the human eye is more sensitive to green.

In other fields, green is also often a substitute for monochrome information.
Examples: the "Sync on Green" pin on monitors; the green frame in "frame sequential" slow-scan TV (7s/8s SSTV with individual frames for red/green/blue).
Here, the green frame has hues that match a pure monochrome/black-and-white picture the most.

"Time, it seems, doesn't flow. For some it's fast, for some it's slow.
In what to one race is no time at all, another race can rise and fall..." - The Minstrel

//My video channel//

Reply 11 of 25, by rasz_pl

User metadata
Rank l33t
pan069 wrote on 2022-06-13, 20:38:

Pros and cons. 32-bit is faster to process since it aligns with memory locations whereas 24-bit does not. However, 24-bit takes up less memory (1 byte less per pixel) but more processing is required to transfer data.

Did any VGA manufacturer employ a clever trick of transparently translating 24-to-32-bit accesses? Graphics RAM is >=64 bits wide anyway, so you could invisibly take care of packing 4 bytes to 3 on writes, 3 to 4 on reads, and any alignment issues _if you are certain this is a 24-bit framebuffer access_. Then again, I wonder if anyone reused the wasted bytes for anything.

Open Source AT&T Globalyst/NCR/FIC 486-GAC-2 proprietary Cache Module reproduction

Reply 12 of 25, by Error 0x7CF

User metadata
Rank Member
rasz_pl wrote on 2022-06-14, 21:13:
pan069 wrote on 2022-06-13, 20:38:

Pros and cons. 32-bit is faster to process since it aligns with memory locations whereas 24-bit does not. However, 24-bit takes up less memory (1 byte less per pixel) but more processing is required to transfer data.

Did any VGA manufacturer employ a clever trick of transparently translating 24-to-32-bit accesses? Graphics RAM is >=64 bits wide anyway, so you could invisibly take care of packing 4 bytes to 3 on writes, 3 to 4 on reads, and any alignment issues _if you are certain this is a 24-bit framebuffer access_. Then again, I wonder if anyone reused the wasted bytes for anything.

You'd need RAM 96 bits wide to avoid any performance hit from misalignment, or some other multiple of 96 bits (3*4*8 bits, so it divides evenly into both 3-byte and 4-byte pixel modes).

4 24-bit pixels, one byte per channel: RGB RGB RGB RGB

packed to 32-bit words:
RGBR GBRG BRGB

loading third pixel: load GBRG, load BRGB, combine and discard extra bytes

64 bit:
RGBRGBRG BRGB <- solves some issues (second pixel is now single-cycle) but third pixel still requires fixups

packed to 96 bits:
RGBRGBRGBRGB <- neither 32-bit nor 24-bit pixels require fixups (32-bit would be RGBARGBARGBA)

128 bits:
RGBRGBRGBRGBRGBR GBRGBRGBRGBRGBRG BRGBRGBRGBRGBRGB <- the sixth and 11th pixels require multiple accesses and we're pretty much back to the same problems as the 32-bit word case in general; only words 0, 3, 6, etc. start aligned to a pixel boundary

Now, a graphics card might be able to waste less than 25% of the space if it had a longer word width like 128 bits: it'd only have to waste one byte per word to keep everything tidy. In particular, that last R byte of the first 128-bit word could be forced to be unused in hardware for 24-bit pixel modes, and then only 6.25% of memory would be wasted instead of 25%.

That would look like: RGBRGBRGBRGBRGBX RGBRGBRGBRGBRGBX

So: nice alignment, no fixing up any pixels, single loads for everything, and you always know where your R, G, and B components are in your 128-bit words since they're always in the same spots; there aren't 3 different spots they could be in depending on the address.

Old precedes antique.

Reply 13 of 25, by rasz_pl

User metadata
Rank l33t
Error 0x7CF wrote on 2022-06-16, 00:49:

You'd need RAM 96 bits wide to avoid any performance hit from misalignment

The way I see it, by the time we got to 24/32-bit video modes (VLB/PCI), everything was either 100% accelerated internally on the video card, or the system was copying the whole framebuffer in one linear burst write. The worst-case scenario is a 100% random access pattern generating up to 2 VRAM CAS delays per transaction.
I wonder if anyone even experimented with it; strikes me as something 3Dfx would try.

Speaking of crazy 24-bit video hacks: https://en.wikipedia.org/wiki/HP_Color_recovery gives a "near 24-bit" color look from an 8-bit framebuffer. Similar concept but taken to the extreme: why store 24 bits in 32 bits if you could store them in 8 😀
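The general idea, very roughly (this is NOT HP's actual algorithm, just a sketch of ordered-dithering 24-bit colour into an 8-bit 3-3-2 framebuffer, which a scan-out filter would then smooth back toward 24-bit):

    #include <stdint.h>

    static const int bayer4[4][4] = {
        {  0,  8,  2, 10 },
        { 12,  4, 14,  6 },
        {  3, 11,  1,  9 },
        { 15,  7, 13,  5 },
    };

    /* ordered-dither one 24-bit pixel down to R3G3B2 */
    static uint8_t dither_332(int x, int y, int r, int g, int b)
    {
        int t = bayer4[y & 3][x & 3];                     /* threshold 0..15  */
        int r3 = (r + t * 2) >> 5;  if (r3 > 7) r3 = 7;   /* 8 bits -> 3 bits */
        int g3 = (g + t * 2) >> 5;  if (g3 > 7) g3 = 7;
        int b2 = (b + t * 4) >> 6;  if (b2 > 3) b2 = 3;   /* 8 bits -> 2 bits */
        return (uint8_t)((r3 << 5) | (g3 << 2) | b2);
    }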

Open Source AT&T Globalyst/NCR/FIC 486-GAC-2 proprietary Cache Module reproduction

Reply 14 of 25, by Kahenraz

User metadata
Rank l33t

HP Color Recovery is very interesting. I couldn't find any color examples, but I did find a PDF with a grayscale one.

[Attached image: Screenshot_20220616-000734_Drive.jpg, 153.37 KiB, public domain]

Attachments: apr95a6.pdf (210.53 KiB, public domain)

Reply 15 of 25, by Matth79

User metadata
Rank Oldbie

800x600 could have fitted into 2 MB as 24 or 32-bit; 1024x768 can't. The only real memory-economy issue from way back is 640x480, which could fit in 1 MB as 24-bit but not as 32-bit.
Another paradox from way back is that 256 colour was usually faster than 16, as each pixel was a whole byte, so no 4-bit nibble shifting/masking was needed.
Support for 256 colour was confusing though: one program might be able to index the 236 spare colours, while others only gained the 4 extra fixed colours... hmm, let's see if I can remember / find them... a rather nice off-white cream colour, and a useful additional medium grey that added a step to the greyscale for things which lacked colour indexing.

Reply 16 of 25, by rasz_pl

User metadata
Rank l33t

Another paradox of the 256-color 13h mode is its Chain-4 internal organization, due to the underlying EGA hardware compatibility. A 13h linear write actually stores bytes in a "weird" every-4th-byte order on the graphics card side (see Re: FastDoom. A new Doom port for DOS, optimized to be as fast as possible for 386/486 personal computers!). Internally it still operates in 4 planes like the EGA 16-color modes; the difference is switching from chaining single bits in 16-color modes (4 bits per pixel) to chaining full bytes in 256-color mode. Btw, Mode X works by disabling this invisible address translation.

http://swag.outpostbbs.net/FAQ/0034.PAS.html wrote:

So, as you can see, the VGA memory consists of four bit planes of 64000 bytes
each, just like the EGA. All four bit planes are mapped at adress A0000h. The
way the bit planes in VGA mode 13h are used differs from the EGA. In EGA modes
the bit planes are used to determine the value of the pixels (0-16). They are
(in EGA) organized as four 64000 bytes long bit chains (and not byte chains).
In VGA mode 13h they are organised as four byte chains. The four bit planes are
chained together and the pixels are spread over these bit planes. More
specific : the first pixel = pixel 0 (1 byte) is mapped in bit plane 0 at
offset 0, pixel 1 is mapped in bit plane 1 at offset 0, pixel 2 in bit plane
2 at offset 0, pixel 3 in bit plane 3 at offset 0, pixel 4 in bit plane 0 at
offset 1 and so on. So far for the VGA mode 13h.

This is why FastDoom runs faster in VESA 320x200@256 than in the standard 13h mode, despite both using the exact same external linear buffer memory organization.
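For reference, a pixel write with that chaining disabled ("Mode X" style) looks roughly like this; only a sketch, using Borland-style far pointers and port I/O:

    #include <dos.h>

    #define SC_INDEX 0x3C4   /* VGA sequencer index port         */
    #define MAP_MASK 0x02    /* sequencer register 2: plane mask */

    void putpixel_unchained(int x, int y, unsigned char color)
    {
        unsigned char far *vram = (unsigned char far *)MK_FP(0xA000, 0);

        outportb(SC_INDEX, MAP_MASK);
        outportb(SC_INDEX + 1, 1 << (x & 3));     /* select one of the 4 planes  */
        vram[y * (320 / 4) + (x >> 2)] = color;   /* 80 bytes per row, per plane */
    }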

Open Source AT&T Globalyst/NCR/FIC 486-GAC-2 proprietary Cache Module reproduction

Reply 17 of 25, by eddman

User metadata
Rank Member

Considering that 32-bit is still ~16 million colors like 24-bit, where does the statement that video cards do "internal 32-bit color processing" fit in?

For example, I've come across claims that cards like the Voodoo3 actually process colors at 32-bit but then output them at 16-bit (which is then filtered up to a higher color depth by the RAMDAC, but that's beside the point).

Do they mean to say 24-bit, ~16 million colors are involved, or something like actual 30-bit, 1 billion color processing? (and perhaps 2 unused bits?)

Last edited by eddman on 2023-03-06, 11:43. Edited 1 time in total.

Reply 18 of 25, by Scali

User metadata
Rank l33t
pan069 wrote on 2022-06-13, 20:38:

Pros and cons. 32-bit is faster to process since it aligns with memory locations whereas 24-bit does not. However, 24-bit takes up less memory (1 byte less per pixel) but more processing is required to transfer data.

Most notably the 'takes up less memory' part.
24-bit truecolour was chosen on most early truecolour SVGA cards, because they couldn't afford to 'waste' the extra 8 bits on each pixel.
Once memory became cheaper, 32-bit truecolour became more popular, and there was a transition time where many SVGA cards supported both 24 and 32-bit for maximum compatibility.

But yes, rule of thumb is: if the system supports 32-bit, and you have enough memory for it, 32-bit mode is usually faster than 24-bit.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 19 of 25, by Scali

User metadata
Rank l33t
eddman wrote on 2023-03-06, 10:24:

Considering that 32-bit is still ~16 million colors like 24-bit, where does the statement that video cards do "internal 32-bit color processing" fit in?

That is generally in the context of 3D acceleration, where you'd either have 16-bit or 32-bit modes (I don't know of any accelerator that uses the quirky 24-bit mode).
By claiming "internal 32-bit color processing" they mean that any kind of lighting or translucency operations are done with 32-bit precision, even if the final result is reduced to 16-bit before displaying on screen.
When using 16-bit, you have limited precision for blending colours and such, and generally this is done via dithering rather than mathematical blending.
So 32-bit internal processing should lead to better image quality. You get near-32-bit-quality rendering output while still requiring only a 16-bit framebuffer for the final image, saving memory and allowing higher resolutions.
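A toy example of that precision loss (not any chip's real pipeline; just one 8-bit colour channel blended three times, once keeping 8-bit intermediates and once truncating each pass to 5 bits the way a 16-bit framebuffer would store it):

    #include <stdio.h>

    int main(void)
    {
        int full = 200, trunc5 = 200;
        for (int i = 0; i < 3; i++) {
            full   = (full + 32) / 2;                   /* blend at 8-bit precision  */
            trunc5 = (((trunc5 + 32) / 2) >> 3) << 3;   /* store each pass as 5 bits */
        }
        printf("8-bit intermediates: %d, 5-bit intermediates: %d\n", full, trunc5);
        return 0;
    }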

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/