xjas wrote:There already exists a 3D accelerator for the 8088. It's called an 8087.
(* Yes, I KNOW most 3D back then was written with integer code. It was a wisecrack - settle down, pedants!)
A V20 (or V30) would be the real accelerator there. 😜 Or ... just swap out the whole motherboard for one of the 10 or 12 MHz turbo-mode V20 systems. (Or 16 MHz, but I haven't seen any references to XT clones with ISA clock dividers - 12 MHz was already pushing it for a lot of cards and required engaging the turbo switch after boot, and 16 MHz was just too far out of spec.) Or an 80188/186 for that matter, though the per-clock performance gain was similar. (Both gained a ton in hardware multiply speed - great for 3D on paper, not so great for games already optimized around shifts, adds, and lookup-table workarounds - plus the 286's multiply was still a lot faster.)
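To show what I mean by the shift-and-add workarounds, here's a minimal C sketch rather than period 8088 asm (the function names are just mine for illustration):

#include <stdint.h>

/* y * 320 the straightforward way - on an 8088 a general MUL costs dozens of cycles */
static uint16_t row_offset_mul(uint16_t y)
{
    return (uint16_t)(y * 320u);
}

/* The same thing with shifts and adds: 320 = 256 + 64, so y*320 = (y<<8) + (y<<6).
   Shifts and adds were far cheaper than MUL on the 8088, which is why engines
   leaned on them (and on lookup tables) instead of hardware multiplies. */
static uint16_t row_offset_shift(uint16_t y)
{
    return (uint16_t)((y << 8) + (y << 6));
}

The V20/V30 and 186/286 close that multiply gap in hardware, which is exactly why code already written around tricks like this didn't gain as much as the raw multiply timings suggest.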
Scali wrote:
ryoder wrote:That is the beauty of a 3d card. You upload the data and the card does the work for the cpu.
Only for actual 'GPUs' though, as in, video chips that also perform the T&L steps.
Early accelerators could only draw triangles, and still required the CPU to transform and light all triangles before sending the raw coordinates in screen-space to the 3d accelerator.
An 8088 system wouldn't be able to process a lot of geometry in realtime.
However, take a proper T&L-accelerated GPU, such as the GeForce256, then you upload your geometry once, and only have to send new matrices and other parameters to the card. Then the 8088 may have a fighting chance.
Downside is that they want their input in floating point coordinates, so you'd need to use the 8087, which won't exactly break any speed records.
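To make the 'CPU transforms and lights everything' point concrete, here's roughly what the host has to do per vertex, per frame, before a triangles-only accelerator gets its screen-space coordinates (a fixed-point C sketch, not any particular card's API; Vec3 and xform_project are just names I made up):

#include <stdint.h>

typedef int32_t fx;                       /* 16.16 fixed point, the usual 8087 substitute */
#define FXMUL(a, b) ((fx)(((int64_t)(a) * (b)) >> 16))

typedef struct { fx x, y, z; } Vec3;

/* Rotate/translate one vertex by a 3x4 matrix, then project to 320x200 screen space.
   That's nine fixed-point multiplies plus a divide per vertex (before any lighting),
   every frame - the part an 8088 can't sustain for much geometry. */
static void xform_project(const fx m[3][4], Vec3 v, int *sx, int *sy)
{
    fx x = FXMUL(m[0][0], v.x) + FXMUL(m[0][1], v.y) + FXMUL(m[0][2], v.z) + m[0][3];
    fx y = FXMUL(m[1][0], v.x) + FXMUL(m[1][1], v.y) + FXMUL(m[1][2], v.z) + m[1][3];
    fx z = FXMUL(m[2][0], v.x) + FXMUL(m[2][1], v.y) + FXMUL(m[2][2], v.z) + m[2][3];

    if (z < 1) z = 1;                                  /* crude near clamp */
    *sx = 160 + (int)(((int64_t)x * 128) / z);         /* perspective divide, focal = 128 */
    *sy = 100 - (int)(((int64_t)y * 128) / z);
}

With a T&L GPU like the GeForce 256 that whole loop disappears from the host; the geometry lives on the card and the CPU only updates matrices, which is the 'fighting chance' Scali describes.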
There were a fair number of TMS34010 (or 34020) based accelerators around, most of them 2D-oriented, but the 34010 itself was pretty fast at 3D math (by 286/386 standards at least, possibly even 486) on top of its blitter capabilities. (You could effectively offload the entire game to the GPU/video coprocessor board and have the CPU mostly handle system I/O.)
TMS34010-based accelerators were almost common enough for that to have been a real-world consideration for game developers, but given that even more common DOS/Windows accelerators (beyond VGA's very basic feature set), like the various IBM 8514 clones/compatibles (ATI's Mach series among others), got no game support, it would have been asking a bit much of developers even in the early 90s when lower-end variants of such boards became reasonably affordable. (Wikipedia mentions '8514 clones' commonly implemented using the TMS34010, but that doesn't make sense to me unless there was a higher-level abstraction-layer standard for that hardware that made fixed-function and more programmable implementations equally usable ... or the 34010 boards happened to be 8514 compatible with the coprocessor's functionality added on top.)
A REALLY 3D-oriented coprocessor board would probably have been a DSP mated to a GPU (say a TMS34010 + TMS32010, looking at the bottom-end models of the time) combined with a VGA/SVGA-compatible display controller; that could have been among the earliest reasonably powerful consumer-accessible 3D accelerator arrangements, had TI chosen to target that market. (Making the DSP optional and the plain 34010 the lowest common denominator seems more realistic for a baseline standard likely to be adopted in the consumer market - TI supposedly did try to get home game console companies to adopt that coprocessor, so aiming at the home computer game market wouldn't be far off.) Hell, unlike first-gen consumer polygon rasterizers like the Matrox Millennium, a graphics-oriented MPU like that wouldn't be stuck with a flat/smooth-shaded polygon limit and could mix 2D and 3D operations, with or without textures, in 256 colors, taking advantage of lookup-table workarounds for a palettized colorspace. (Though pushing a 320x200x16bpp baseline would be pretty nice for 3D, and the 34010's arbitrary bit-depth handling could manipulate 5-6-5 RGB values well enough for shading and blending effects, possibly faster than 8bpp with lookup-table operations; 5-5-5 would work too, but if you're using an 18-bit DAC for VGA compliance you might as well go 5-6-5.)
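For the 5-6-5 bit-fiddling I'm thinking of, stuff like this (a C sketch of a well-known packed-RGB trick, not anything 34010-specific):

#include <stdint.h>

/* Darken a 5-6-5 pixel by 50%: mask off the low bit of each field (0xF7DE clears
   bits 0, 5 and 11), then shift - no channel unpacking, no palette table needed. */
static uint16_t rgb565_half(uint16_t p)
{
    return (uint16_t)((p & 0xF7DEu) >> 1);
}

/* 50/50 blend of two 5-6-5 pixels, e.g. cheap translucency or shading steps. */
static uint16_t rgb565_avg(uint16_t a, uint16_t b)
{
    return (uint16_t)(rgb565_half(a) + rgb565_half(b));
}

In 8bpp you'd need a 64k (256x256) lookup table to get the same blend of two indices, which is the RAM-versus-math tradeoff again.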
From what I understand, ATi's Mach accelerator cores were more fixed-function blitters than actual coprocessors (independent MPUs) like the 34010, and wouldn't have encouraged that sort of programming. (They could still have been good for accelerating many types of 2D games along with solid-filled-polygon games, using blitter commands for polygon line-fills; hardware support for scaling and rotating blitter objects could be exploited for affine texture rendering too.) If ATi had gone as far as engineering a 34010 instruction-compatible MPU that would change things a bit, but I don't think that was the case.
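The blitter-assisted polygon idea is basically this: the CPU steps the edges, the card fills each scanline (a sketch; fill_hline() is a stand-in for whatever the board's actual fill/blit command would be):

#include <stdint.h>

extern void fill_hline(int x0, int x1, int y, uint8_t color);   /* hypothetical board command */

/* Flat-bottom triangle: apex at (xt, yt), base from xl to xr on line yb (yb > yt).
   The host only walks the two edges in 16.16 fixed point; the per-scanline fill
   is the part the blitter takes off the CPU's hands. */
static void fill_flat_bottom(int xt, int yt, int xl, int xr, int yb, uint8_t color)
{
    int32_t lx = (int32_t)xt << 16, rx = lx;
    int32_t dl = ((int32_t)(xl - xt) << 16) / (yb - yt);
    int32_t dr = ((int32_t)(xr - xt) << 16) / (yb - yt);

    for (int y = yt; y <= yb; y++) {
        fill_hline((int)(lx >> 16), (int)(rx >> 16), y, color);
        lx += dl;
        rx += dr;
    }
}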
Sticking a minimalist TMS34010 board in an XT clone would be a bit like what the SuperFX chip did for the SNES ... except the video board would probably have a hell of a lot more RAM (granted, those cart-based games had 512k to 2 MB of ROM to work from on top of the little 32k SRAM buffer used for rendering, and something like 1 kB of on-chip scratchpad). The 320x200x16bpp idea pushes it closer to what Sega tried with the 32X, but older (or cheaper) and a lot slower than the SH2s (and with only very basic line-fill acceleration from the 'super VDP').
If the accelerator board were indeed doing most of the work (and the game processing too), a narrow 8-bit ISA host connection wouldn't totally ruin things, even if 16-bit would be preferred. (Granted, by the time such boards could have been made cheap enough to be aggressively priced - i.e. as cheap as or cheaper than typical early-90s SVGA cards - you'd really be looking at 286s as the lowest common denominator anyway, like Wing Commander II and Wolfenstein 3D era stuff catering to the 512k/640k real-mode limit; probably somewhat akin to the games that shipped on CD-ROM yet stayed 286/real-mode compatible ... a mix of next-gen and old hardware support.)
A lot of software-rendered PC games started using massive amounts of RAM (for the time) as a workaround for raw computational power, with precalculated tables and canned animation used where realtime work would have been too slow (or possible in a pinch, but not worth catering to limited-RAM configurations). Accelerated graphics subsystems more capable of realtime operation could do more with less RAM. (Obviously on the system memory end, but I mean their own local video/coprocessor memory as well.)
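The classic example of that RAM-for-CPU trade is the precomputed trig table (a C sketch; period code usually baked the table into the data segment rather than computing it at init):

#include <math.h>
#include <stdint.h>

#define ANGLES 1024                    /* angle steps per circle; power of two so the mask works */
static int16_t sin_tab[ANGLES];        /* 2 KB of RAM buys 'free' sine from then on */

static void init_sin_tab(void)
{
    for (int i = 0; i < ANGLES; i++)
        sin_tab[i] = (int16_t)(sin(i * 6.283185307179586 / ANGLES) * 16384.0);  /* 2.14 fixed point */
}

/* Hot path: one table read and a mask instead of any trig at all. */
static int16_t fast_sin(unsigned a) { return sin_tab[a & (ANGLES - 1)]; }
static int16_t fast_cos(unsigned a) { return sin_tab[(a + ANGLES / 4) & (ANGLES - 1)]; }

Scale that thinking up to precalculated animation frames and you get the multi-megabyte RAM requirements I mean; a coprocessor fast enough to do the math in realtime just doesn't need the tables.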