VOGONS


8 bit 3D card


Reply 20 of 33, by matze79

User metadata
Rank l33t

Cards like that already exist, for pretty old flight simulators based on the 8086.
They consist of a special CPU and extra memory: essentially another SBC plugged into an ISA slot.
Asymmetric multiprocessing.

https://www.retrokits.de - blog, retro projects, hdd clicker, diy soundcards etc
https://www.retroianer.de - german retro computer board

Reply 21 of 33, by snorg

Rank Oldbie

Here is the card I was thinking of. Apparently there is one on eBay, although at $2800 I don't think it is going anywhere anytime soon:

https://en.wikipedia.org/wiki/IrisVision

http://www.ebay.com/itm/IV-AT-24Z-SILICON-GRA … 5oAAOSwL7VWmbnv

I imagine that if you could somehow design a 3D card around the limitations of an 8-bit bus, then yes, you could have 3D graphics in an XT-class system. But you'd have to design it yourself, since (to my knowledge) no such beast was ever made.

I never knew the above card existed; if SGI had seriously explored making PC add-in cards, they might still be with us. This is clearly a 16-bit card, though, and not suitable for an XT-class system.

Reply 22 of 33, by gdjacobs

Rank l33t++

That IrisGL card requires 32-bit instructions. 386 or better, please!

Here are some examples of state-of-the-art CAD from the late 70s and 80s:
http://www.plmworld.org/index.php?mo=cm&op=ld&fid=239

Have a look at the output of the workstation from '85. This would be jaw-dropping for an XT.

All hail the Great Capacitor Brand Finder

Reply 23 of 33, by megatron-uk

Rank Oldbie
leileilol wrote:

is there any LGA 1150 mobo with ISA hidden out there... like in the industrial market or something?

Last with fully working ISA slots (DMA, bus mastering, etc.) was Socket 478/Pentium 4 (845, 865, and rumours of the 875 chipset). However, you can get ISA support on Intel Q77 (3rd-gen i3/i5/i7) if you can live without DMA and a few other peculiarities:

http://www.ibase.com.tw/english/ProductDetail … Computing/MB970

My collection database and technical wiki:
https://www.target-earth.net

Reply 24 of 33, by snorg

Rank Oldbie

I would not have thought they'd have a system capable of 3072x2304... that's 4K resolution, more or less, isn't it? That's insane for the mid-1980s.

I suspect you could probably implement something like a 512x512 or 512x400 windowed 3D view working alongside the system's VGA board, though how exactly you'd do this with an 8-bit bus I'm not sure. You'd have to design all the hardware and write the software drivers to make any use of it, and the rest of the system sure wouldn't be much of a screamer.

It might be an interesting technical challenge for someone with the right kind of skills, but given that you'd wait weeks for an XT-class system to render a simple animation, I'm not sure an interactive 3D graphics subsystem makes a ton of sense.

Reply 25 of 33, by bhtooefr

Rank Newbie

The comments about anti-aliasing firmware are making me think it's probably 1024x768 on the display, but with 3x FSAA.

Reply 26 of 33, by snorg

Rank Oldbie
Scali wrote:
ryoder wrote:

That is the beauty of a 3d card. You upload the data and the card does the work for the cpu.

Only for actual 'GPUs' though, as in, video chips that also perform the T&L steps.
Early accelerators could only draw triangles, and still required the CPU to transform and light all triangles before sending the raw coordinates in screen-space to the 3d accelerator.
An 8088 system wouldn't be able to process a lot of geometry in realtime.
However, take a proper T&L-accelerated GPU, such as the GeForce256, then you upload your geometry once, and only have to send new matrices and other parameters to the card. Then the 8088 may have a fighting chance.
Downside is that they want their input in floating point coordinates, so you'd need to use the 8087, which won't exactly break any speed records.

I found this over on OpenCores:

http://opencores.org/project,wf3d

It only does wireframe 3D, though. If you could put one of those on an FPGA board, along with an 8087 or 80287 FPU clocked super high (80287 for the XT-class 286 systems), maybe this would be possible? At least in wireframe? I'm not sure how you'd go about interfacing the FPGA with the ISA bus, though. At that point, it might be easier to do the whole XT-class system in an FPGA along with the 3D board and a souped-up FPU to feed it. You could have an 8088 or NEC V30 running at 20 MHz that way. But at that point you're no longer talking about an add-in board; you're talking about a hardware simulation.

Reply 28 of 33, by Azarien

Rank Oldbie
megatron-uk wrote:

Last with fully working ISA slots (DMA, bus mastering, etc.) was Socket 478/Pentium 4 (845, 865, and rumours of the 875 chipset). However, you can get ISA support on Intel Q77 (3rd-gen i3/i5/i7) if you can live without DMA and a few other peculiarities:

http://www.ibase.com.tw/english/ProductDetail … Computing/MB970

Are you sure about that non-functional DMA? I don't see any "ISA slot just for show" fine print on their page or in the manual.

i7 with ISA slots.

I'd really like to know if classic Sound Blasters work on them...

Reply 29 of 33, by mrau

Rank Oldbie

So you need an AT-bus card, maybe working on full ISA when available, that understands fixed-point numbers and consists mainly of unified shaders, as those use little bandwidth ;p Easy.

Reply 30 of 33, by bhtooefr

Rank Newbie
Azarien wrote:

Are you sure about that non-functional DMA? I don't see any "ISA slot just for show" fine print on their page nor in the manual.

Most industrial ISA cards don't need DMA; they're just using port I/O, I believe.

Reply 31 of 33, by kool kitty89

Rank Member
xjas wrote:

There already exists a 3D accelerator for the 8088. It's called an 8087.

(* Yes, I KNOW most 3D back then was written with integer code. It was a wisecrack - settle down, pedants!)

A V20 (or V30) would be the real accelerator there. 😜 Or... just swap out the whole motherboard for one of the 10 or 12 MHz turbo-mode V20 systems. (Or 16 MHz, but I haven't seen any references to XT clones with ISA clock dividers; 12 MHz was pushing it for a lot of cards and required the turbo switch to be enabled after boot, but 16 MHz was just too far out of spec.) Or an 80188/186 for that matter, but the performance gain per clock was similar. (Both gained a ton in hardware multiply performance: great for 3D on paper, not so great for games already optimized around shifts, adds, and lookup-table workarounds. Plus the 286's multiply was still a lot faster.)

Scali wrote:
Only for actual 'GPUs' though, as in, video chips that also perform the T&L steps. Early accelerators could only draw triangles, […]
Show full quote
ryoder wrote:

That is the beauty of a 3d card. You upload the data and the card does the work for the cpu.

Only for actual 'GPUs' though, as in, video chips that also perform the T&L steps.
Early accelerators could only draw triangles, and still required the CPU to transform and light all triangles before sending the raw coordinates in screen-space to the 3d accelerator.
An 8088 system wouldn't be able to process a lot of geometry in realtime.
However, take a proper T&L-accelerated GPU, such as the GeForce256, then you upload your geometry once, and only have to send new matrices and other parameters to the card. Then the 8088 may have a fighting chance.
Downside is that they want their input in floating point coordinates, so you'd need to use the 8087, which won't exactly break any speed records.

There were a fair number of TMS34010 (or 34020) based accelerators around, most of them 2D-oriented, but the 34010 itself was pretty fast at 3D math (by 286/386 standards at least, possibly even 486) on top of its blitter capabilities. (You could effectively offload the entire game to the GPU/video coprocessor board and have the CPU mostly handle system I/O.)

TMS34010-based accelerators were almost common enough for that to have been a real-world consideration for game developers. But given the lack of game support for even more common DOS/Windows accelerators (beyond VGA's very basic feature set), like the various IBM 8514 clones/compatibles (ATI's Mach series among others), it would have been asking a bit much of developers even in the early 90s, when lower-end variants of such boards became reasonably affordable. (Wikipedia mentions '8514 clones' commonly implemented using the TMS34010, but that doesn't make sense to me unless there was a higher-level abstraction layer standard with that hardware that made fixed-function and more programmable implementations equally usable... or the 34010 systems happened to be 8514-compatible but with the added functionality of the coprocessor.)

A REALLY 3D-oriented coprocessor board, probably a DSP mated to a GPU (say, a TMS34010 + TMS32010, looking at the bottom-end models around back then) combined with a VGA/SVGA-compatible display controller, would have been among the earliest reasonably powerful consumer-accessible 3D accelerator arrangements, had TI chosen to target that market. (Albeit making the DSP optional and the basic 34010 the lowest common denominator seems more realistic for formulating a baseline standard likely to be adopted in the consumer market. TI supposedly did try to get home game console companies to adopt that coprocessor, so aiming at the home computer game market wouldn't be far off.) Hell, unlike first-gen consumer polygon rasterizers like the Matrox Millennium, a graphics-oriented MPU like that could avoid the flat/smooth-shaded polygon limit, do various combinations of 2D and 3D operations with and without textures in 256 colors, and take advantage of lookup-table workarounds for a palettized colorspace. (Though pushing for a 320x200x16bpp basic common standard would be pretty nice for 3D, and the 34010's arbitrary bit-depth handling could allow manipulation of 5-6-5 RGB values pretty well for shading and blending effects, possibly faster than 8bpp with lookup-table operations; 5-5-5 would work too, but if you're using an 18-bit DAC for VGA compliance, you might as well go 5-6-5.)

From what I understand, ATI's Mach accelerator cores were more fixed-function blitters, not actual coprocessors (independent MPUs) like the 34010, and wouldn't have encouraged that sort of programming. (Though they could have been good for accelerating many types of 2D games along with solid-filled polygon games, using blitter commands to do polygon line-fills; hardware support for scaling and rotation of blitter objects could be exploited for affine texture rendering too.) If ATI went as far as to engineer a 34010 instruction-compatible MPU, that changes things a bit, but I don't think that's the case.

Sticking a minimalistic TMS34010 board in an XT clone would be a bit like what the SuperFX chip did for the SNES... except the video board would probably have a hell of a lot more RAM. (Granted, those cart-based games had 512k to 2 MB of ROM to work from on top of the little 32k SRAM buffer used for rendering, and something like 1 kB of on-chip scratchpad.) Though the 320x200x16bpp idea pushes it closer to what Sega tried with the 32X, but older (or cheaper) and a lot slower than the SH2s (and with very basic line-fill acceleration for the 'super VDP').

If the accelerator board was indeed doing most of the work (and game processing too), a narrow 8-bit ISA host connection wouldn't totally ruin things, even if 16-bit would be preferred. (Granted, by the time such boards could have been made cheap enough to be aggressively priced, i.e. as cheap as or cheaper than typical early-90s SVGA cards, you'd really be working with 286s as the lowest common denominator anyway: Wing Commander II and Wolfenstein 3D era stuff catering to the 512k/640k real-mode limit, probably somewhat akin to the games that came on CD-ROM yet were 286/real-mode compatible... a mix of next-gen and old hardware support.)

A lot of software-rendered PC games started using massive amounts of RAM (for the time) as a workaround for raw computational resources, with precalculated tables and animation used where realtime work would have been too slow (or possible in a pinch, but not worth catering to limited RAM configurations). Accelerated graphics subsystems more capable of realtime operations could do more with less RAM. (Obviously on the system memory end, but I mean their own local video/coprocessor memory as well.)

Reply 32 of 33, by snorg

Rank Oldbie
kool kitty89 wrote:
A V20 (or V30) would be the real accelerator there. :P Or ... just swap out the whole motherboard for one of the 10 or 12 MHz tur […]

Sorry to necro this thread, but after seeing this over on Hackaday, it got me wondering if there would be any way to interface a Raspberry Pi Zero with an 8-bit machine to do OpenGL. I know other folks have used the GPIO to talk to the serial port on retro machines; I think Cloudschatze might even have rigged up something like that on one of his Tandys a while back. Would the GPIO on a Pi Zero have enough pins to output data over the 8-bit bus? And how would you talk to it?

Here is the link. I think what this person did is a lot simpler than what I'm thinking of; it looks like he (or she) is using it more like a terminal emulator: https://hackaday.io/project/9567-5-graphics-c … or-homebrew-z80

Reply 33 of 33, by 386SX

Rank l33t

It reminds me of the SuperFX chip for the SNES. But as said, without any games written specifically for it, any such hardware would be useless.
A while ago I read about a project where someone built a homemade cartridge with an external CPU for the Nintendo GB Color and rewrote the code to run Wolf3D on it. Considering the main CPU is an 8 MHz Z80, if I remember correctly...