First post, by PentAmd
Some background
Around 1995, when the 1st Voodoo card came out, memory was so expensive that there was no option but to keep texture storage small, which meant palettized textures and at most 16-bit pixels. The texture patterns on a wall were visibly recurring.
Some tech: doing alpha blending on a texture (like a window or a fence) in 16 bits means 4-bit alpha, 4-bit red, 4-bit green, 4-bit blue. Another possibility is to reduce the alpha to 1 bit (on/off), so the remaining 15 bits can be used for the R-G-B colors.
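To make the two trade-offs concrete, here is a minimal sketch in plain C of how an 8-bit-per-channel ARGB color would be packed into these two 16-bit layouts (the helper names are mine, just for illustration):

#include <stdint.h>

/* 4444 layout: 4 bits for each of alpha, red, green, blue. */
static uint16_t pack_argb4444(uint8_t a, uint8_t r, uint8_t g, uint8_t b)
{
    return (uint16_t)(((a >> 4) << 12) | ((r >> 4) << 8) |
                      ((g >> 4) << 4)  |  (b >> 4));
}

/* 1555 layout: 1-bit on/off alpha, 5 bits per color channel. */
static uint16_t pack_argb1555(uint8_t a, uint8_t r, uint8_t g, uint8_t b)
{
    return (uint16_t)(((a >= 128) << 15) | ((r >> 3) << 10) |
                      ((g >> 3) << 5)    |  (b >> 3));
}

Both throw away most of the original precision; the only question is whether the bits are spent on alpha steps or on color steps.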
This trend continued until the Voodoo3 came out. The Voodoo3 still had 16-bit output for 3D; with some filtering they pushed the effective output color to about 22 bits. (Interesting note: the 2D side was able to output 24-bit, so the RAMDAC inside the Voodoo chip technically had everything it needed. Only the 3D pipeline was not that capable.)
I read the "Questions & Answers" on a Voodoo website, and I would like to highlight this passage:
Q: "Why doesn't Voodoo3 support 32b rendering, or large textures, or 32b textures?"
A: ".....As for image quality, we’ve gone to great lengths to make games look great in 16bpp mode. We actually do the rendering calculations internally at 32 bits to have full precision with the 16-bit operands. Then, instead of simply truncating the results to 16 bits for saving in the frame buffer we use a proprietary filtering algorithm that retains nearly the full precision of the color value. The result is something that rivals ANY full 32-bit rendering, only it goes a lot faster...."
Note especially the claim that rendering is done internally at 32 bits and that a proprietary filtering algorithm "retains nearly the full precision of the color value".
In the Voodoo3 reference manual, page 22 (bottom):
"Color Modes: Avenger supports 16-bit RGB (5-6-5) buffer displays only . Internally , Avenger graphics
utilizes a 32-bit ARGB 3Dpixel pipeline for maximum precision, but the 24-bit internal RGB color is
dithered to 16-bit RGB before being stored in the color buffers. The host may also transfer 24-bit RGB
pixels to Avenger using 3D linear frame buffer accesses, and color ditheringis utilized to convert the
input pixels to native 16-bit format with no performance penalty ."
Page 23 (top):
"Color Dithering Operations: All operations internal to Avenger operate in native 32-bit ARGB pixel
mode. However, color dithering from the 24-bit RGB pixels to 16-bit RGB (5-6-5) pixels is provided on
the back end of the pixel pipeline. Using the color dithering option, the host can pass 24-bit RGB pixels
to Avenger, which converts the incoming 24-bit RGB pixels to 16-bit RGB (5-6-5) pixels which are then
stored in the 16-bit RGB buffer. The 16-bit color dithering allows for the generation of photorealistic
images without the additional cost of a true color frame buffer storage area. "
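3dfx never published their "proprietary filtering algorithm", so the following is only a sketch of the general idea of back-end dithering, not their implementation (bayer4 and dither_to_565 are my own names): nudge each 24-bit pixel by a position-dependent threshold before truncating to 5-6-5, so the lost precision is spread over neighboring pixels instead of becoming flat banding.

#include <stdint.h>

/* Classic 4x4 ordered-dither (Bayer) threshold matrix, values 0..15. */
static const uint8_t bayer4[4][4] = {
    {  0,  8,  2, 10 },
    { 12,  4, 14,  6 },
    {  3, 11,  1,  9 },
    { 15,  7, 13,  5 },
};

/* Dither one 24-bit RGB pixel at screen position (x, y) to RGB565. */
static uint16_t dither_to_565(uint8_t r, uint8_t g, uint8_t b, int x, int y)
{
    int t = bayer4[y & 3][x & 3];
    int r5 = (r + (t >> 1)) >> 3; if (r5 > 31) r5 = 31; /* 8 -> 5 bits */
    int g6 = (g + (t >> 2)) >> 2; if (g6 > 63) g6 = 63; /* 8 -> 6 bits */
    int b5 = (b + (t >> 1)) >> 3; if (b5 > 31) b5 = 31; /* 8 -> 5 bits */
    return (uint16_t)((r5 << 11) | (g6 << 5) | b5);
}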
I have a Voodoo2. While experimenting with the Glide3 sample programs, I tried to load an ARGB8888 (32-bit) texture into card memory using the grTexDownloadMipMap function, but the program crashed. After some investigation I found that tlLoadTexture had already failed before that call. Digging into the Voodoo2 Glide3 source code (the "cvg" branch), I found that the following color formats are accepted:
CfTableEntry cftable[] =
{
{ "I8", GR_TEXFMT_INTENSITY_8, FXTRUE },
{ "A8", GR_TEXFMT_ALPHA_8, FXTRUE },
{ "AI44", GR_TEXFMT_ALPHA_INTENSITY_44, FXTRUE },
{ "YIQ", GR_TEXFMT_YIQ_422, FXTRUE },
{ "RGB332", GR_TEXFMT_RGB_332, FXTRUE },
{ "RGB565", GR_TEXFMT_RGB_565, FXTRUE },
{ "ARGB8332", GR_TEXFMT_ARGB_8332, FXTRUE },
{ "ARGB1555", GR_TEXFMT_ARGB_1555, FXTRUE },
{ "AYIQ8422", GR_TEXFMT_AYIQ_8422, FXTRUE },
{ "ARGB4444", GR_TEXFMT_ARGB_4444, FXTRUE },
{ "AI88", GR_TEXFMT_ALPHA_INTENSITY_88, FXTRUE },
{ "P8", GR_TEXFMT_P_8, FXTRUE },
{ "AP88", GR_TEXFMT_AP_88, FXTRUE },
{ 0, 0, FXFALSE }
};
All of them are 8- or 16-bit. I was hoping the Voodoo2 could accept 32-bit textures and that Glide would convert them to 16-bit with dithering, but it does not. And the Voodoo2 cannot use the later Glide3 versions at all; it reports an error immediately. So, for the Voodoo2, let's skip it.
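The obvious workaround is to do the conversion on the host before the download. A hedged sketch of that idea (the function name and the fixed 256x256 size are my simplifications; mipmap chains and error handling are omitted):

#include <glide.h>
#include <stdint.h>

/* Convert a 256x256 ARGB8888 image to RGB565 on the CPU, then download
   it to TMU0 in a format the Voodoo2 actually accepts. */
static void download_as_565(const uint32_t *argb, uint16_t *tmp)
{
    for (int i = 0; i < 256 * 256; i++) {
        uint32_t c = argb[i];
        uint32_t r = (c >> 16) & 0xFF, g = (c >> 8) & 0xFF, b = c & 0xFF;
        tmp[i] = (uint16_t)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
    }

    GrTexInfo info;
    info.smallLodLog2    = GR_LOD_LOG2_256;
    info.largeLodLog2    = GR_LOD_LOG2_256;
    info.aspectRatioLog2 = GR_ASPECT_LOG2_1x1;
    info.format          = GR_TEXFMT_RGB_565;
    info.data            = tmp;

    grTexDownloadMipMap(GR_TMU0, grTexMinAddress(GR_TMU0),
                        GR_MIPMAPLEVELMASK_BOTH, &info);
}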
Then I decided to check the next generation, the Voodoo3, so I went through the Glide3 source again, this time the "H3" branch, which is for the Voodoo3. The color table looked the same:
CfTableEntry cftable[] =
{
{ "I8", GR_TEXFMT_INTENSITY_8, FXTRUE },
{ "A8", GR_TEXFMT_ALPHA_8, FXTRUE },
{ "AI44", GR_TEXFMT_ALPHA_INTENSITY_44, FXTRUE },
{ "YIQ", GR_TEXFMT_YIQ_422, FXTRUE },
{ "RGB332", GR_TEXFMT_RGB_332, FXTRUE },
{ "RGB565", GR_TEXFMT_RGB_565, FXTRUE },
{ "ARGB8332", GR_TEXFMT_ARGB_8332, FXTRUE },
{ "ARGB1555", GR_TEXFMT_ARGB_1555, FXTRUE },
{ "AYIQ8422", GR_TEXFMT_AYIQ_8422, FXTRUE },
{ "ARGB4444", GR_TEXFMT_ARGB_4444, FXTRUE },
{ "AI88", GR_TEXFMT_ALPHA_INTENSITY_88, FXTRUE },
{ "P8", GR_TEXFMT_P_8, FXTRUE },
{ "AP88", GR_TEXFMT_AP_88, FXTRUE },
{ 0, 0, FXFALSE }
};
So I went to the "H5" (Voodoo4/5) branch, and bingo, there I found the 32-bit support:
CfTableEntry cftable[] =
{
{ "I8", GR_TEXFMT_INTENSITY_8, FXTRUE },
{ "A8", GR_TEXFMT_ALPHA_8, FXTRUE },
{ "AI44", GR_TEXFMT_ALPHA_INTENSITY_44, FXTRUE },
{ "YIQ", GR_TEXFMT_YIQ_422, FXTRUE },
{ "RGB332", GR_TEXFMT_RGB_332, FXTRUE },
{ "RGB565", GR_TEXFMT_RGB_565, FXTRUE },
{ "ARGB8332", GR_TEXFMT_ARGB_8332, FXTRUE },
{ "ARGB1555", GR_TEXFMT_ARGB_1555, FXTRUE },
{ "AYIQ8422", GR_TEXFMT_AYIQ_8422, FXTRUE },
{ "ARGB4444", GR_TEXFMT_ARGB_4444, FXTRUE },
{ "AI88", GR_TEXFMT_ALPHA_INTENSITY_88, FXTRUE },
{ "P8", GR_TEXFMT_P_8, FXTRUE },
{ "AP88", GR_TEXFMT_AP_88, FXTRUE },
{ "ARGB8888", GR_TEXFMT_ARGB_8888, FXTRUE },
#ifdef FX_GLIDE_NAPALM
/* other texture formats. */
{ "FXT1", GR_TEXFMT_ARGB_CMP_FXT1, FXTRUE },
{ "FXT1_HI", GR_TEXFMT_ARGB_CMP_FXT1, FXTRUE },
{ "FXT1_MIXED", GR_TEXFMT_ARGB_CMP_FXT1, FXTRUE },
{ "FXT1_CHROMA", GR_TEXFMT_ARGB_CMP_FXT1, FXTRUE },
{ "FXT1_ALPHA", GR_TEXFMT_ARGB_CMP_FXT1, FXTRUE },
{ "P6666", GR_TEXFMT_P_8_6666, FXTRUE },
/*{ "RSVD1", GR_TEXFMT_RSVD1, FXTRUE },
{ "RSVD2", GR_TEXFMT_RSVD2, FXTRUE },
{ "RSVD4", GR_TEXFMT_RSVD4, FXTRUE },*/
{ "YUYV422", GR_TEXFMT_YUYV_422, FXTRUE },
{ "UYVY22", GR_TEXFMT_UYVY_422, FXTRUE },
{ "AYUV444", GR_TEXFMT_AYUV_444, FXTRUE },
/* TODO: to support DXTn, we need to read .dds files.
{ "DXT1", GR_TEXFMT_ARGB_CMP_DXT1, FXTRUE },
{ "DXT2", GR_TEXFMT_ARGB_CMP_DXT2, FXTRUE },
{ "DXT3", GR_TEXFMT_ARGB_CMP_DXT3, FXTRUE },
{ "DXT4", GR_TEXFMT_ARGB_CMP_DXT4, FXTRUE },
{ "DXT5", GR_TEXFMT_ARGB_CMP_DXT5, FXTRUE },*/
#endif
{ 0, 0, FXFALSE }
};
There is an #ifdef in the middle with the word "napalm" (Napalm was the codename of the Voodoo4/5 chip), which means only the V4/5 can use the formats below it. Note, though, that the ARGB8888 entry itself sits above the #ifdef.
Anyway, in this code I was also able to find the Read32Bit function, which reads 32-bit textures from a file.
So the Voodoo4/5 with the latest Glide3 can definitely read such files, but we already knew that.
The question is whether the Voodoo3 (and maybe the Banshee) can at least OPEN 32-bit textures using this Glide3.
Maybe the placement of that #ifdef means the Glide "H5" branch is not only for the Voodoo4/5. Maybe it is a newer Glide version for every card down to the Voodoo3; a probe sketch for testing this follows below.
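If that speculation is right, it should be testable directly on a Voodoo3. Glide3 has no return codes for this, but it reports problems through a callback installed with grErrorSetCallback, and grGetString(GR_HARDWARE) names the chip. A hedged sketch of such a probe (untested on real hardware; assumes Glide is initialized and a context is open; whether a Voodoo3 driver reports an error or silently misrenders is exactly the open question):

#include <glide.h>
#include <stdio.h>

static int g_glide_error = 0;

/* Glide reports problems through this callback instead of return codes. */
static void err_cb(const char *msg, FxBool fatal)
{
    g_glide_error = 1;
    printf("Glide error (%s): %s\n", fatal ? "fatal" : "non-fatal", msg);
}

static void probe_argb8888(void)
{
    static FxU32 pixels[32 * 32]; /* dummy 32x32 ARGB8888 texture */
    GrTexInfo info;

    info.smallLodLog2    = GR_LOD_LOG2_32;
    info.largeLodLog2    = GR_LOD_LOG2_32;
    info.aspectRatioLog2 = GR_ASPECT_LOG2_1x1;
    info.format          = GR_TEXFMT_ARGB_8888;
    info.data            = pixels;

    grErrorSetCallback(err_cb);
    printf("Hardware: %s\n", grGetString(GR_HARDWARE));

    grTexDownloadMipMap(GR_TMU0, grTexMinAddress(GR_TMU0),
                        GR_MIPMAPLEVELMASK_BOTH, &info);
    printf("ARGB8888 download %s\n",
           g_glide_error ? "was rejected" : "reported no error");
}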
There is a comment in the file's revision history about the 32-bit support:
** 6 6/14/99 5:16p Larryw
** Added 32-bit texture format support.
So 32-bit support was added on 14 June 1999. According to Google, the Voodoo3 came out on 3 April 1999.
I think the 32-bit support was meant for the upcoming Voodoo4/5, and all the marketing above about internal 32-bit or 24-bit calculation is a lie. I can imagine that some registers carry extra bits for intermediate calculations, but the 3D frame buffer uses only 16 bits.
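To be fair, internal precision can matter even when the frame buffer stays 16-bit. A toy illustration of the difference between keeping 8 bits per channel through the pipeline versus truncating to 5 bits after every blending pass (as a naive 16-bit read-modify-write would):

#include <stdio.h>

int main(void)
{
    /* Blend black (0) over white (255) four times at 50% alpha. */
    int full = 255, trunc5 = 255;
    for (int i = 0; i < 4; i++) {
        full = full / 2;                 /* 8-bit precision kept */
        trunc5 = trunc5 / 2;
        trunc5 = (trunc5 >> 3) << 3;     /* truncated to 5-bit steps */
    }
    printf("8-bit internal: %d, 5-bit per pass: %d\n", full, trunc5);
    return 0;
}

Here the 8-bit path ends at 15 while the per-pass-truncated path ends at 8, already several visible gray levels apart. But of course, once the result is stored in the 16-bit buffer and read back for the next pass, that precision is gone again, which is exactly the limit of the marketing claim.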
So a question to the community: do you have any experience with the "internal 24/32-bit" rendering of the Voodoo3 card?
Recently, while studying the Glide demo code, I have been using nGlide (a Glide wrapper) on my laptop. nGlide is probably based on the latest Voodoo5 Glide3 (H5), which is why the code I wrote works flawlessly on nGlide but has problems on the real hardware.
To be continued...
Please share your thoughts!