VOGONS


Reply 20 of 48, by megatron-uk

User metadata
Rank Oldbie

Are any of those examples really vector art, or just basic line-art graphics? I have trouble believing any of them use any form of scaling or trig functions to pre-render or output to the screen.

My collection database and technical wiki:
https://www.target-earth.net

Reply 21 of 48, by jakethompson1

User metadata
Rank Oldbie

Even with unaccelerated VGA there was fancy code to make operations like BitBlt fast. It involved generating machine code on the fly and putting it on the stack and then executing it, sort of like self-modifying code: https://devblogs.microsoft.com/oldnewthing/20 … 209-00/?p=97995
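
For a flavour of the trick, here's a minimal sketch of runtime code generation - not the actual GDI code, and assuming x86-64 Linux with POSIX mmap(), since modern systems won't let you execute the stack the way Win16 could:

```c
/* Illustrative sketch: emit machine code at runtime, then call it.
 * The Win16 blit compiler built its code on the stack; modern OSes
 * require an explicitly executable buffer instead. x86-64 only. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void) {
    /* mov eax, 42 ; ret */
    unsigned char code[] = { 0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3 };

    void *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE | PROT_EXEC,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) return 1;
    memcpy(buf, code, sizeof code);

    int (*fn)(void) = (int (*)(void))buf;
    printf("%d\n", fn());   /* prints 42 */
    munmap(buf, 4096);
    return 0;
}
```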

An accelerated card, instead of doing an operation like BitBlt on the CPU, would instead pass it to the graphics chip to execute in hardware.

IIRC from Vista onward all 2D acceleration capabilities like this that the card may have are bypassed; is that right?

Reply 22 of 48, by serialShinobi

User metadata
Rank Newbie

Thanks. I can see everyone who replied is very interested in graphics history. I recently decided to build a 486 because of how much support I knew I could get from the large community that grew up around the IBM PC clone industry. I now realize that there was a driver for the S3 chipset and Windows 3.1. Now I know why so many sources refer to the first (PC) accelerators as "GUI accelerators", or mention speeding up drawing of the Windows GUI but not games in DOS. There was no DOS driver for the S3 911.

And those engineers from Silicon Valley who founded S3 didn't try to "reinvent the wheel" by reinventing the IBM PC. They simply used an established design, deploying it at a time when it was feasible to shrink the thousands of dollars in hardware that was the monolithic IBM 8514 down to a small $500 card.

I watched someone demo the IBM 8514 API using Turbo C. I saw the capabilities of the Mindset PC. Those were amazing things to behold considering the time they appeared. I also read about the way people had to throw the whole graphics card at displaying graphics in MS Windows. It's too bad that IBM's graphics API wasn't as popular as MS Windows. If only people had had the power back then to choose their software vendor like they do today.

It's also good that ATI didn't likewise win the 3D accelerator war during the reign of the 3dfx Voodoo 1. Can you imagine GLQuake by id Software, and being locked into the software renderer because ATI didn't support OpenGL?

Reply 23 of 48, by Jo22

User metadata
Rank l33t++

You're welcome. Thanks for opening this thread, too. 🙂

Edit: There's a demoscene production made especially for the early S3 accelerators.
It works in DOSBox, so I assume it's S3 Trio-friendly, maybe S3 Vision-friendly, too.
Not sure if it still works with the S3 ViRGE.

https://www.pouet.net/prod.php?which=1524

megatron-uk wrote on 2023-03-20, 18:22:

Are any of those examples really vector art, or just basic line-art graphics? I have trouble believing any of them use any form of scaling or trig functions to pre-render or output to the screen.

I'm not entirely sure; maybe checking the game's resources could help find out… 🤷‍♂️

Maybe they're similar to Autodesk DXF, not sure.

But even if the games use BASIC drawing commands (lines, circles, boxes, fill), acceleration might be possible.
I mean, it works for QB45, at least. Even if it's not super efficient, maybe.

That rare Mindset PC from the early 80s ran all its graphics demos on a modified GW-BASIC (AFAIK).

https://youtu.be/3a_qJFD80_c?t=553

"Time, it seems, doesn't flow. For some it's fast, for some it's slow.
In what to one race is no time at all, another race can rise and fall..." - The Minstrel

//My video channel//

Reply 24 of 48, by och

User metadata
Rank Newbie

From what I understand, the PC was never designed with a dedicated graphics subsystem in mind, so until 3D accelerators there wasn't much development in 2D acceleration. And modern GPUs no longer even provide any sort of Windows GUI acceleration, which is a shame.

However, if you're interested in the topic, read up on consoles and arcade machines, especially Sega arcade systems - they had graphics systems with very clever 2D acceleration before they went full 3D.

Reply 25 of 48, by wbahnassi

User metadata
Rank Member
Jo22 wrote on 2023-03-19, 08:29:

If only DOS developers weren't so lazy and dumb.
Some compilers like QuickBasic 4.5 automatically used an available FPU.

Maybe for 3D games, but for 2D games I'd argue it's the reverse. Target the common base with whatever time you have to finish the game. Give the players gameplay value by adding game features with the time you'd otherwise spend rewriting graphics routines to use FP math, when your game can already display all it wants without it. When fractions are needed, fixed-point math was always possible and cheap, and it's more predictable than floating point for game-code use cases.
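
As an aside, the fixed-point idea fits in a few lines. A minimal 16.16 sketch (names illustrative, not taken from any shipped game):

```c
/* Minimal 16.16 fixed-point sketch: integers carry 16 fractional
 * bits, so fractions work without touching the FPU. */
#include <stdint.h>
#include <stdio.h>

typedef int32_t fx;                        /* 16.16 fixed point   */
#define INT_TO_FX(i) ((fx)(i) << 16)
#define FX_TO_INT(f) ((int)((f) >> 16))

static fx fx_mul(fx a, fx b) {             /* widen, then rescale */
    return (fx)(((int64_t)a * b) >> 16);
}
static fx fx_div(fx a, fx b) {
    return (fx)(((int64_t)a << 16) / b);
}

int main(void) {
    fx one_and_half = INT_TO_FX(3) / 2;                            /* 1.5 */
    printf("%d\n", FX_TO_INT(fx_mul(one_and_half, INT_TO_FX(4)))); /* 6   */
    printf("%d\n", FX_TO_INT(fx_div(INT_TO_FX(7), INT_TO_FX(2)))); /* 3 (3.5 truncated) */
    return 0;
}
```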

Reply 27 of 48, by Gmlb256

User metadata
Rank l33t

MMX is a general-purpose SIMD instruction set, and software needs explicit support for it. Some games, such as Rebel Moon Rising, Extreme Assault and POD, used it for software rendering, but the performance pales in comparison to real 3D hardware accelerators.
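
For illustration, explicit MMX support looked something like this - a minimal sketch using the compiler intrinsics (assuming x86 GCC/Clang), brightening eight pixels with one saturating add; this is not code from any of those games:

```c
/* Illustrative MMX sketch: add 30 to eight 8bpp pixels at once.
 * PADDUSB saturates at 255 instead of wrapping around. */
#include <mmintrin.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    unsigned char px[8]  = { 10, 100, 200, 250, 0, 50, 128, 255 };
    unsigned char amt[8] = { 30, 30, 30, 30, 30, 30, 30, 30 };
    __m64 a, b;
    memcpy(&a, px, 8);
    memcpy(&b, amt, 8);
    a = _mm_adds_pu8(a, b);   /* PADDUSB: saturating unsigned byte add */
    memcpy(px, &a, 8);
    _mm_empty();              /* EMMS: release the aliased x87 registers */
    for (int i = 0; i < 8; i++)
        printf("%d ", px[i]); /* 40 130 230 255 30 80 158 255 */
    printf("\n");
    return 0;
}
```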

VIA C3 Nehemiah 1.2A @ 1.46 GHz | ASUS P2-99 | 256 MB PC133 SDRAM | GeForce3 Ti 200 64 MB | Voodoo2 12 MB | SBLive! | AWE64 | SBPro2 | GUS

Reply 29 of 48, by Deano

User metadata
Rank Newbie

The reason games didn't use the FPU until the Quake era was that FPUs were much slower than fixed-point integer math in almost all cases that mattered for games. Quake and Descent were the first to notice that whilst the FPU divide was still quite slow, it could run in parallel with the normal integer ALUs.
In early 3D-accelerated games we still didn't use floats that much, because 3D cards didn't consume floats; triangle setup engines only appeared in 2nd-gen cards (Voodoo II and RIVA, for example).

Fundamentally, outside of GUIs, 2D games needed just one thing: fast blits with punch-through (aka source colour keying). Many Windows accelerators didn't have source colour keying because Windows didn't use it, so the accelerators were actually pretty crap for games even when we did get a HW-capable API. Same thing for VESA/AF: I had a path that used it back in the day, but honestly the number of cards it was useful on (beyond linear framebuffers, which we got with VESA 2.0) was minimal. Early 3D accelerators were notoriously bad at games because they often borrowed HW from the Windows acceleration cores, which had fundamentally different requirements. It was only once they started providing dedicated 3D cores aligned to what games wanted that software rendering finally went away.
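
A minimal sketch of such a punch-through blit in plain C, for an 8bpp surface (the buffer layout and key value 0 are illustrative assumptions):

```c
#include <stdint.h>

/* Copy a w-by-h sprite, skipping pixels equal to the colour key
 * so the background shows through ("punch through"). */
#define COLOR_KEY 0   /* palette index treated as transparent */

void blit_keyed(uint8_t *dst, int dst_pitch,
                const uint8_t *src, int src_pitch,
                int w, int h)
{
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            uint8_t c = src[y * src_pitch + x];
            if (c != COLOR_KEY)            /* skip transparent pixels */
                dst[y * dst_pitch + x] = c;
        }
    }
}
```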

Like the 8087, which was designed for CAD and the like, the existing Windows acceleration HW wasn't very useful for games.

When the PC HW people noticed games were becoming big business, they started making hardware that was actually useful.

Game dev since last century

Reply 30 of 48, by Gmlb256

User metadata
Rank l33t
och wrote on 2023-12-26, 13:59:

Understood, thank you. Reading through this thread, it seems that even the FPU had to be explicitly supported by software to be utilized?

Yes. And even though the MMX instructions are integer-only, their registers are aliased to the x87 FPU ones.

VIA C3 Nehemiah 1.2A @ 1.46 GHz | ASUS P2-99 | 256 MB PC133 SDRAM | GeForce3 Ti 200 64 MB | Voodoo2 12 MB | SBLive! | AWE64 | SBPro2 | GUS

Reply 31 of 48, by Deano

User metadata
Rank Newbie

It's also worth noting that good ol' VGA does have some minimal acceleration features in it; the biggest reason they didn't get used more was that the ISA bus was so slow once we got to the 386+ that it was faster to render into a CPU-local buffer and copy that up on vsync than to actually work on the VGA side.
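
To make that concrete, here's a minimal sketch of the buffer-and-copy approach, assuming mode 13h and a Borland-style DOS compiler in the large memory model (so memcpy accepts far pointers); names are illustrative:

```c
/* Sketch of "render to system RAM, copy on vsync" for mode 13h. */
#include <dos.h>      /* inportb(), MK_FP(); Borland-style, an assumption */
#include <string.h>

#define INPUT_STATUS_1 0x3DA   /* VGA input status register */
#define VRETRACE       0x08    /* vertical retrace bit      */

static unsigned char far *vga = (unsigned char far *)MK_FP(0xA000, 0);
static unsigned char backbuf[64000];   /* 320x200, 8bpp back buffer */

void present(void)
{
    while  (inportb(INPUT_STATUS_1) & VRETRACE) ;  /* mid-retrace? wait it out  */
    while (!(inportb(INPUT_STATUS_1) & VRETRACE)) ; /* wait for retrace to start */
    memcpy(vga, backbuf, sizeof backbuf);           /* one burst over the ISA bus */
}
```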

Game dev since last century

Reply 32 of 48, by Deano

User metadata
Rank Newbie
Gmlb256 wrote on 2023-12-26, 13:47:

MMX is a general-purpose SIMD instruction set, and software needs explicit support for it. Some games, such as Rebel Moon Rising, Extreme Assault and POD, used it for software rendering, but the performance pales in comparison to real 3D hardware accelerators.

MMX is integer-only; it wasn't until SSE with the Pentium III that we got float SIMD, which was useful for accelerating geometry transforms if you needed high vertex throughput - but most games didn't.
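
For example, a minimal sketch of the kind of geometry work SSE made fast (assuming x86 with xmmintrin.h; names illustrative): scaling and translating an array of xyzw vertices, four floats per instruction:

```c
/* Illustrative SSE sketch: scale + translate xyzw vertices.
 * One _mm_mul_ps/_mm_add_ps pair processes a whole vertex. */
#include <xmmintrin.h>

void transform(float *verts, int count, float scale, const float offs[4])
{
    __m128 s = _mm_set1_ps(scale);
    __m128 t = _mm_loadu_ps(offs);
    for (int i = 0; i < count; i++) {
        __m128 v = _mm_loadu_ps(verts + 4 * i);
        v = _mm_add_ps(_mm_mul_ps(v, s), t);
        _mm_storeu_ps(verts + 4 * i, v);
    }
}
```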

Game dev since last century

Reply 33 of 48, by dionb

User metadata
Rank l33t++
och wrote on 2023-12-26, 13:59:

Understood, thank you. Reading through this thread, it seems that even the FPU had to be explicitly supported by software to be utilized?

Yes. Basically, DOS doesn't do any kind of abstraction for this sort of functionality. All you have is what the BIOS offers you, which is I/O-related - not any kind of graphics or sound or whatever, let alone additional CPU instruction sets.

So if your software wants to talk to any hardware, it has to address it directly, in whatever way the hardware itself understands. That's why you have to configure specific sound hardware in DOS, and the same applies to any form of co-processor, be it an FPU or a graphics accelerator. AutoCAD, for example, supported at least three different kinds of FPU (the 8087, the Intel 287/387/487 (and compatibles), and Weitek), as well as 8514 and XGA drawing acceleration. But it was an outlier: AutoCAD was so specialized and expensive that people built computers around the software, and almost everyone doing so made sure they had the hardware to use it optimally. Once again, all this support needed to be hard-coded into your application, so every developer had to do it themselves as well. Both Lotus 1-2-3 and AutoCAD supported FPUs, but they each had to reinvent the wheel themselves, as did Spectrum Holobyte with Falcon 3.0 and Maxis with SimCity.
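
To give an idea of what that hard-coding looked like, here's a minimal sketch of the classic x87 presence check (GCC inline-asm syntax here for readability; real DOS code was usually straight assembly):

```c
/* Sketch of the classic FPU presence test: the no-wait FNINIT/FNSTSW
 * pair doesn't hang when no FPU is present; if nothing stores a status
 * word, the sentinel survives. On anything modern this always finds one. */
#include <stdio.h>

int main(void)
{
    volatile unsigned short sw = 0x5A5A;   /* sentinel value */
    __asm__ __volatile__("fninit\n\t"      /* reset the FPU (if any)      */
                         "fnstsw %0"       /* store status word to memory */
                         : "=m"(sw));
    /* after FNINIT the status word is 0x0000 */
    printf(sw == 0 ? "x87 FPU present\n" : "no FPU\n");
    return 0;
}
```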

For games, that generally made no commercial sense: in the 1980s and early 1990s (up to the 386 era), accelerated hardware was so rare and expensive that almost no one developed for it. By the time of the 486, an FPU was fairly common (but still too slow to be of use to many games, as detailed above), as was graphics acceleration hardware, but the latter market was completely fragmented, with every vendor having their own unique hardware that was only compatible with the others at the most basic (S)VGA/VESA level. So that is what developers coded for. At no point would the extra effort of supporting multiple chips natively translate into additional sales and revenue.

It's no coincidence that utilization of co-processors, graphics acceleration and indeed more advanced CPU instructions only became commonplace once Windows had abstracted the hardware, so you just developed once for DirectX (or maybe OpenGL) and let Windows and its drivers figure out how to talk to the hardware.

Reply 34 of 48, by Deano

User metadata
Rank Newbie

We all did write for the different SVGA chipsets (or at least the top 5), but we had no info on their acceleration (the vendors never gave it to us), so we fell back to VESA for the uncommon ones. As for OpenGL or DirectX being write-once-for-all-cards: not in the early days, at the very least. Every card was so different that we had special work for most of them, and it actually got worse due to driver issues. Late-90s games were crazy complicated due to working around dozens of video cards and their crappy drivers. Silent Hill 2 PC (early 2000s) has about 5 rendering paths (SW TnL, HW TnL, vertex shaders, and also NoPS, PS1_1 and PS1_4, IIRC) with a lot of driver/card fixes on top. OpenGL in the end became so extension-based that you ended up with two paths for the two major vendors.

Trust me, between 1995 and 2005 writing code that worked across a range of PC hardware was a lot of work. What really changed is that, starting with DX9, MS told the IHVs what they would have to support.

Game dev since last century

Reply 36 of 48, by mkarcher

User metadata
Rank l33t
Jo22 wrote on 2023-03-21, 02:30:

Edit: There's a demoscene production made especially for the early S3 accelerators.
It works in DOSBox, so I assume it's S3 Trio-friendly, maybe S3 Vision-friendly, too.
Not sure if it still works with the S3 ViRGE.

As the Trio is basically the integration of three components (hence the name!), which are

  1. an S3 Vision 864-like graphics core,
  2. a RAMDAC, and
  3. a clock synthesizer,

anything that runs on a Trio should also run on a Vision chip. The Trio64V+ integrates the later Vision 868 core, including video acceleration.

The 2D accelerator of the ViRGE is entirely different from the 2D accelerator of all earlier S3 chips. The earlier S3 chips are inspired by the IBM 8514/A video accelerator, whereas the S3 ViRGE 2D acceleration engine is actually a special operation mode of its 3D acceleration engine, and is thus programmed in a very different way. That demoscene production is not ViRGE-compatible if it tries to use any kind of acceleration feature - even if it's just the most basic bit blitting.

Reply 37 of 48, by rasz_pl

User metadata
Rank l33t
Gmlb256 wrote on 2023-12-26, 13:47:

MMX is a general-purpose SIMD instruction set, and software needs explicit support for it. Some games, such as Rebel Moon Rising, Extreme Assault and POD, used it for software rendering, but the performance pales in comparison to real 3D hardware accelerators.

None of those games used MMX for rendering. POD, for example, includes an optional audio filter implemented with MMX in order to fulfill the terms of its contract with Intel for the $payout$. Intel was paying $1 million per game to advertise MMX, and the POD box is plastered with "Designed for MMX" slogans.
MMX is useless for 3D rendering, or for games in general. It was designed for fixed-point DSP math: think software modems, video compression, audio effects.

Deano wrote on 2023-12-26, 14:10:

Quake and Descent were the first to notice that whilst the FPU divide was still quite slow, it could run in parallel with the normal integer ALUs.

Descent 1/2 don't use the FPU in software rendering mode.

Deano wrote on 2023-12-26, 14:10:

In early 3D-accelerated games we still didn't use floats that much, because 3D cards didn't consume floats; triangle setup engines only appeared in 2nd-gen cards (Voodoo II and RIVA, for example).

3D cards do consume floats; that's how you get subpixel precision. 3dfx cards accept either single-precision floats or fixed point for triangles, color, alpha, and texture coordinates.
The "triangle setup" unit of the Voodoo 2:
1. culls triangles facing away from the camera, and
2. allows drawing triangle strips and fans (less data to transfer, because coordinates of the previous triangle are reused),
and it has nothing to do with the triangle data format.

Open Source AT&T Globalyst/NCR/FIC 486-GAC-2 proprietary Cache Module reproduction

Reply 38 of 48, by Jo22

User metadata
Rank l33t++
och wrote on 2023-12-26, 05:50:

From what I understand, the PC was never designed with a dedicated graphics subsystem in mind, so until 3D accelerators there wasn't much development in 2D acceleration. And modern GPUs no longer even provide any sort of Windows GUI acceleration, which is a shame.
[..]

There was a popular chip by NEC/Intel that did feature some intelligence, the µPD7220.

For some reason, though, IBM went with the Motorola 6845 for MDA and CGA.
Even back then, people weren't really happy with this decision.

The 6845 was intended as a basic text-mode controller, rather. The cursor could blink on its own, at least.
The 6845 was intended as a basic text generator, rather. The cursor could blink on its own, at least.

https://www.computer.org/publications/tech-ne … -graphics-chips

https://en.wikipedia.org/wiki/NEC_%C2%B5PD7220

Deano wrote on 2023-12-26, 14:14:

It's also worth noting that good ol' VGA does have some minimal acceleration features in it; the biggest reason they didn't get used more was that the ISA bus was so slow once we got to the 386+ that it was faster to render into a CPU-local buffer and copy that up on vsync than to actually work on the VGA side.

That's right, I believe. It can do a few things on its own.
Things like scrolling, moving blocks of pixels, XORing pixels, etc. A hardware mouse cursor…
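
For what it's worth, here's a minimal sketch of the block-move part (assuming a Borland-style DOS compiler and a 16-colour planar EGA/VGA mode): in write mode 1, one read plus one write moves 8 pixels across all four planes at once via the latches.

```c
/* Illustrative sketch: copy video memory through the VGA latches
 * (write mode 1) in a 16-colour planar mode. Each byte read fills
 * the four latches; each byte written copies all four planes. */
#include <dos.h>    /* outportb(), MK_FP(); Borland-style, an assumption */

void latch_copy(unsigned src_off, unsigned dst_off, unsigned bytes)
{
    volatile unsigned char far *vram =
        (volatile unsigned char far *)MK_FP(0xA000, 0);
    unsigned i;
    unsigned char latch;

    outportb(0x3CE, 5);   /* Graphics Controller index 5: mode register */
    outportb(0x3CF, 1);   /* select write mode 1                        */

    for (i = 0; i < bytes; i++) {
        latch = vram[src_off + i];   /* read: fills the 4 latches      */
        vram[dst_off + i] = latch;   /* write: copies all 4 planes     */
    }

    outportb(0x3CE, 5);
    outportb(0x3CF, 0);   /* back to write mode 0 */
}
```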

"Time, it seems, doesn't flow. For some it's fast, for some it's slow.
In what to one race is no time at all, another race can rise and fall..." - The Minstrel

//My video channel//

Reply 39 of 48, by och

User metadata
Rank Newbie
Deano wrote on 2023-12-26, 14:14:

It's also worth noting that good ol' VGA does have some minimal acceleration features in it; the biggest reason they didn't get used more was that the ISA bus was so slow once we got to the 386+ that it was faster to render into a CPU-local buffer and copy that up on vsync than to actually work on the VGA side.

So that is akin to some of the early 3D "decelerators" from S3, Matrox, etc. - they were slower than software rendering, at least when paired with faster CPUs, and introduced a bunch of graphical glitches.