VOGONS

First post, by c64z80

User metadata
Rank Newbie

In a lot of old Windows games, up to around DirectX 8 I think, there was an option to select software rendering if your video card was somehow not up to the job. That settings option was as much a staple of old games as finding an Adobe Reader installer bundled on the CD 😜

If I am not wrong (I do not play many modern PC games), it seems this option is no longer available, or has changed somehow.

So, what happened to it? Did it just fall by the wayside? Do games now have some sort of hybrid system where the less your video card is up to the job, the more the CPU takes on, and the option is just not needed? Or are games today too demanding for the CPU to manage things on its own, so the option is no longer needed?

Reply 1 of 14, by leileilol

User metadata
Rank l33t++

Everything went 3D, and programming a software renderer that kept up with the evolution of realtime hardware 3D graphics would be a huge burden in human resources, especially since Abrash was much harder to get a hold of and it's not trivial to find software-rendering assembly geniuses in the games industry. Source was going to have a software renderer once. UE2 got a software renderer for UT2004 later on, done with a Pixomatic driver (and it still didn't look right), etc.

The turning point is usually id Tech 3 with Q3, since it dropped software rendering completely; it wouldn't have been feasible with the multi-UV, unaligned texture stages for every shader etc., unlike the 16-texel-aligned lightmaps and leaf systems of the earlier id Tech 2

which makes this super impressive to me

There was also an 'ew, blocky lego pixels!!!' stigma in the late 90s that contributed to the fall of the software renderer, and we've come full circle now, in a post-Minecraft, Unity, Sturgeon's-law "retro" age where these blocky graphics require modern hardware to work and it's done for misinformed nostalgia dorking, only this time with no idea of the design principles they're supposedly homaging, ending up as half-assed throwbacks with limited appeal.

Last edited by leileilol on 2016-09-25, 21:40. Edited 1 time in total.

long live PCem

Reply 3 of 14, by eL_PuSHeR

User metadata
Rank l33t++

I remember having played some games using the software renderer in the past: Unreal, Q2, HL.

I was pretty impressed with Unreal. I didn't like the game that much, but the software renderer was top-notch.

Intel i7 5960X
Gigabyte GA-X99-Gaming 5
8 GB DDR4 (2100)
8 GB GeForce GTX 1070 G1 Gaming (Gigabyte)

Reply 4 of 14, by Scali

User metadata
Rank l33t

Bottom line is that GPUs developed too quickly for CPUs and software rendering to keep up.
Initially, the same game could be played in software and with hardware acceleration, e.g. Quake. Hardware acceleration would give you higher resolution and texture filtering, but other than that it was basically the same game.
But eventually GPUs became so advanced and pushed so many pixels and so much geometry that it was no longer feasible to render it all on the CPU. You could try to render things in software (and there have been attempts such as SoftWire and Pix-o-matic), but it would not actually result in playable games.

There actually still is a software renderer for D3D10/11/12. It is called WARP (Windows Advanced Rasterization Platform). However, it is only meant for development purposes or casual games at best, because the performance is not good enough for serious gaming. It is a renderer developed by Microsoft, which is basically the reference for how D3D should be rendered. This is useful for developers, so they can rule out issues in their GPU or drivers. If it works on WARP, then the code is working correctly.
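
(For the curious, this is roughly how a developer requests a WARP device instead of a hardware one in D3D11; a minimal sketch, with the helper name made up for illustration:)

#include <d3d11.h>
#pragma comment(lib, "d3d11.lib")

// Minimal sketch: create a D3D11 device on WARP (Microsoft's software
// rasterizer) instead of the GPU, e.g. to rule out driver bugs.
HRESULT CreateWarpDevice(ID3D11Device** device, ID3D11DeviceContext** context)
{
    return D3D11CreateDevice(
        nullptr,               // no adapter; must be NULL when using WARP
        D3D_DRIVER_TYPE_WARP,  // software rasterizer instead of the GPU
        nullptr,               // no external software rasterizer DLL
        0,                     // no creation flags
        nullptr, 0,            // accept the default feature levels
        D3D11_SDK_VERSION,
        device,
        nullptr,               // obtained feature level not needed here
        context);
}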

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 5 of 14, by squareguy

User metadata
Rank Oldbie
leileilol wrote:

which makes this super impressive to me

Assembly language master..... on a 16-MHz CPU.... very impressive

Gateway 2000 Case and 200-Watt PSU
Intel SE440BX-2 Motherboard
Intel Pentium III 450 CPU
Micron 384MB SDRAM (3x128)
Compaq Voodoo3 3500 TV Graphics Card
Turtle Beach Santa Cruz Sound Card
Western Digital 7200-RPM, 8MB-Cache, 160GB Hard Drive
Windows 98 SE

Reply 6 of 14, by keropi

User metadata
Rank l33t++

I am pretty sure this Q demo also uses the Motorola 56001 DSP chip that is present in all Falcons; this is not just optimized 030 code. Still a great job though.

🎵 🎧 PCMIDI MPU, OrpheusII, Action Rewind, Megacard and 🎶GoldLib soundcard website

Reply 7 of 14, by c64z80

User metadata
Rank Newbie

Thanks for the replies. I had always thought that GPU rendering was simply an added layer that was taken off for software rendering. It's cool to know that they were actually two different renderers producing nearly identical results 😀

Reply 8 of 14, by Azarien

User metadata
Rank Oldbie

c64z80 wrote:

Do games now have some sort of hybrid system where the less your video card is up to the job, the more the CPU takes on, and the option is just not needed?

No. In most modern games, if your card is not up to the job, you are "sorta out of luck". You may try running the game at a lower resolution, turning down details, etc., but even then some games may simply refuse to run if your card lacks the required shader version.
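
(For illustration, a minimal sketch of that kind of startup check; the 11_0 minimum is just an example, not taken from any particular game:)

#include <d3d11.h>
#pragma comment(lib, "d3d11.lib")

// Require a minimum Direct3D feature level; if the GPU (or its driver)
// can't provide it, device creation fails and the game can refuse to start.
bool GpuMeetsMinimum()
{
    const D3D_FEATURE_LEVEL required[] = { D3D_FEATURE_LEVEL_11_0 };
    return SUCCEEDED(D3D11CreateDevice(
        nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
        required, 1, D3D11_SDK_VERSION,
        nullptr, nullptr, nullptr));   // no device kept; just a capability check
}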

Reply 9 of 14, by Scali

User metadata
Rank l33t

GPU and CPU rendering are pretty much mutually exclusive.
That is, the only logical point where you can make a division is vertex processing. It is (or rather was) somewhat doable to let the CPU do vertex processing and then push the data to the GPU for the actual triangle rasterization and per-pixel shading, texturing and whatnot.

But once you go into rasterization, there's no efficient way to let the CPU do 'part of the work'. The CPU doesn't have the same quick access to the textures, video memory and whatnot, so in practice that is not going to work.
You either want to do everything on the CPU then, or everything on the GPU. The overhead of switching from one to the other would defeat the point.
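
(For illustration, a minimal D3D9-era sketch of that split; the helper name is made up, and hWnd/pp are assumed to be set up elsewhere:)

#include <d3d9.h>
#pragma comment(lib, "d3d9.lib")

// The behaviour flag decides whether vertex processing runs on the CPU or the
// GPU; rasterization, texturing and per-pixel work stay on the card either way.
IDirect3DDevice9* CreateDeviceWithVertexProcessing(IDirect3D9* d3d, HWND hWnd,
                                                   D3DPRESENT_PARAMETERS& pp,
                                                   bool vertexOnCpu)
{
    DWORD behaviour = vertexOnCpu ? D3DCREATE_SOFTWARE_VERTEXPROCESSING
                                  : D3DCREATE_HARDWARE_VERTEXPROCESSING;
    IDirect3DDevice9* device = nullptr;
    if (FAILED(d3d->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hWnd,
                                 behaviour, &pp, &device)))
        return nullptr;
    return device;
}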

In recent years, games have used such heavy geometry loads that CPUs can no longer process them quickly enough either, so CPU vertex processing is no longer an option.

Which is why games have been 100% GPU-rendered for some years now. They do allow you to scale complexity and detail up and down somewhat to cater for different speeds and capabilities of GPUs, but nothing that involves the CPU. Offloading to the CPU simply isn't an option anymore.
Even the slowest of integrated GPUs today would render orders of magnitude faster than the fastest CPU with the most efficient software renderer. There is just so much more parallelism in GPUs, even low-end ones, and then there are all sorts of specialized circuits, such as texture prefetching, filtering etc, that make GPUs so much more efficient than CPUs at rendering tasks (and various other tasks at that).

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 10 of 14, by Scali

User metadata
Rank l33t
keropi wrote:

I am pretty sure this Q demo also uses the Motorola 56001 DSP chip that is present in all Falcons; this is not just optimized 030 code. Still a great job though.

Yup... without the DSP, you'd need a bit more oomph.
With an 060 at ~60 MHz you can do stuff like this: https://youtu.be/8LzXy-cRCnQ

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 11 of 14, by huckleberrypie

User metadata
Rank Newbie
Scali wrote:

Bottom line is that GPUs developed too quickly for CPUs and software rendering to keep up.
Initially, the same game could be played in software and with hardware acceleration, e.g. Quake. Hardware acceleration would give you higher resolution and texture filtering, but other than that it was basically the same game.
But eventually GPUs became so advanced and pushed so many pixels and so much geometry that it was no longer feasible to render it all on the CPU. You could try to render things in software (and there have been attempts such as SoftWire and Pix-o-matic), but it would not actually result in playable games.

There actually still is a software renderer for D3D10/11/12. It is called WARP (Windows Advanced Rasterization Platform). However, it is only meant for development purposes or casual games at best, because the performance is not good enough for serious gaming. It is a renderer developed by Microsoft, which is basically the reference for how D3D should be rendered. This is useful for developers, so they can rule out issues in their GPU or drivers. If it works on WARP, then the code is working correctly.

Besides SwiftShader, that is. I've seen people from developing countries attempt to run the likes of GTA IV on subpar hardware using that tool, and needless to say, it's of little value other than as a curiosity, or for those with the patience of a Buddhist monk.

Reply 12 of 14, by Scali

User metadata
Rank l33t
huckleberrypie wrote:

Besides SwiftShader that is.

Ah yes, that's the one I meant when I said SoftWire. I believe SoftWire was an earlier name for the project.
That's the name he used in his paper: http://lib.ugent.be/fulltxt/RUG01/001/312/059 … 010_0001_AC.pdf

I helped the guy out somewhat, back in the day... I explained to him how cubemapping works, and did some testing for him with some D3D code to find and fix various bugs.
He was convinced that CPUs would close the gap with GPUs as extensions such as SSE got more advanced. That never happened, obviously. In fact, we've moved towards GPUs getting integrated into every CPU, and the two working together. CPUs and GPUs just have very different applications and design parameters, so I think both will be around for a long time. Getting them to work together efficiently is the future.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/