VOGONS



First post, by m1so


If the 3D card revolution had never happened (unlikely, I know), what kind of CPU would games with today's graphics require, at a resolution like 1600x900? Something like dual 8-core Xeons? Or even more?

Reply 1 of 14, by DosFreak


You can use Swiftshader or Mesa3D to run the latest games in software on your CPU if you want to test.

The latest GPUs are so much more powerful than modern CPUs for these operations that the latest games are unplayable on even an i7 in software.

So yeah, we'll definitely need a hell of a lot more cores than we have now.

Intel went the route of integrating their GPU into the CPU first, but it's still dedicated hardware. That does seem to be a possible future, but even though Crysis 3 will run at 1280x, it's still with all graphics options set to low.

It'll be quite some time before we even have decent performance from combined CPU/GPU processors. (By decent I mean 720p+ with full details.)

So if the GPU had never come about, it would be nice to think that areas other than graphics would have progressed further, but I'm sure developers would still be worried about the purty graphics. Except in this alternate world, the shiny graphics would be significantly behind ours... unless you think that, without the resources put behind GPU development, our CPU tech would have progressed further?

How To Ask Questions The Smart Way
Make your games work offline

Reply 2 of 14, by F2bnp

DosFreak wrote:

It'll be quite some time before we even have decent performance from combined CPU/GPU processors. (By decent I mean 720p+ with full details.)

Might be sooner than you think, actually. The next APUs from AMD will apparently be much faster, with the fastest one approaching the speed of a 7750 or even a 7770 (the 7770 is probably an exaggeration, though). I think we're on the right track. 😀

Reply 3 of 14, by leileilol


SSE2 is nice for handling bilinear filtering.
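As a rough sketch of why (illustrative only, not code from any actual renderer): SSE2 lets you widen all four RGBA channels of a texel to 16 bits and blend two texels with one set of instructions, and that weighted blend is the building block of a bilinear fetch (two horizontal blends plus one vertical one).

```cpp
#include <emmintrin.h>  // SSE2 intrinsics
#include <cstdint>

// Blend two RGBA8 texels with a weight in [0, 256], all four channels at once.
static inline uint32_t blend_rgba8_sse2(uint32_t a, uint32_t b, unsigned w)
{
    const __m128i zero = _mm_setzero_si128();
    // Widen each 8-bit channel to 16 bits so the weighted sum cannot overflow.
    __m128i va = _mm_unpacklo_epi8(_mm_cvtsi32_si128(static_cast<int>(a)), zero);
    __m128i vb = _mm_unpacklo_epi8(_mm_cvtsi32_si128(static_cast<int>(b)), zero);
    __m128i wa = _mm_set1_epi16(static_cast<short>(256 - w));
    __m128i wb = _mm_set1_epi16(static_cast<short>(w));
    // (a*(256-w) + b*w) >> 8 per channel; each sum stays below 65536.
    __m128i sum = _mm_add_epi16(_mm_mullo_epi16(va, wa), _mm_mullo_epi16(vb, wb));
    __m128i res = _mm_srli_epi16(sum, 8);
    // Pack back down to four 8-bit channels and return them as one pixel.
    return static_cast<uint32_t>(_mm_cvtsi128_si32(_mm_packus_epi16(res, res)));
}
```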

You might want to look into Darkplaces - it has a SOFTRAST driver which attempts to do 'modern 3D graphics' with multi-threading and SSE2, so it does take advantage of the CPU. It could serve as a nice 'what if 3D cards never existed', and it is definitely faster than wrapping some DX9 game to a software renderer.

Also, Darkplaces does have DX9 output as well, so you could 'compare' it with swiftshader 😀

long live PCem

Reply 4 of 14, by DosFreak


Yeah I'm interested to see what the AMD performance will look like with the new APU next year if only for when I buy a new laptop. 😁

http://www.anandtech.com/show/7106/amds-a1057 … g-performance/3

I can't imagine the new chip being a drastically significant increase, though.

A 30 fps average at 1366 with medium details doesn't seem that bad until you consider the minimum fps...

Meanwhile, on my desktop I've been gaming at full details at 1920x1200 since 2007. Holy crap... I've been stuck at 1920 for that long! Need to upgrade NOW.

How To Ask Questions The Smart Way
Make your games work offline

Reply 7 of 14, by DosFreak


Yeah, I keep forgetting about WARP. If you want to try that on Windows 8, you'll need to disable your video card; then any D3D game will be software rendered.

It's very slow, but faster than SwiftShader. None of the newer games will be playable, however.

http://www.istartedsomething.com/20081126/dir … -albeit-slowly/
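For what it's worth, developers can also opt into WARP explicitly instead of disabling the video card. A minimal sketch using the standard D3D11 device-creation call (assuming the usual Windows SDK headers; the D3D_DRIVER_TYPE_WARP flag is what routes everything through the software rasterizer):

```cpp
#include <d3d11.h>
#pragma comment(lib, "d3d11.lib")

int main() {
    ID3D11Device* device = nullptr;
    ID3D11DeviceContext* context = nullptr;
    D3D_FEATURE_LEVEL level;

    HRESULT hr = D3D11CreateDevice(
        nullptr,                  // default adapter
        D3D_DRIVER_TYPE_WARP,     // software rasterizer instead of the GPU driver
        nullptr, 0,
        nullptr, 0,               // let the runtime pick a feature level
        D3D11_SDK_VERSION,
        &device, &level, &context);

    if (SUCCEEDED(hr)) {
        // ... render as usual; everything runs on the CPU ...
        context->Release();
        device->Release();
    }
    return 0;
}
```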

How To Ask Questions The Smart Way
Make your games work offline

Reply 8 of 14, by Gemini000


The funny thing is, GPUs aren't really all that much more powerful than CPUs (in fact, most run at slower clock speeds), but they have an entirely different architecture. CPUs nowadays are designed to run only a few processes at the same time, but to handle those processes as fast as possible and provide them with lots of cache space and functionality. GPUs are designed to run HUNDREDS of processes at a time, but each process is limited as to what information it can access, and the processes can't typically talk to each other either.

The idea is that when you render a scene on screen, the results of one pixel should not have an effect on the results of another pixel, so why waste time rendering each pixel in sequence when you could render several hundred in the exact same period of nanoseconds?
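A toy sketch of that idea (purely illustrative; shade() here is a made-up placeholder, not any real renderer's function): because each pixel is a pure function of its coordinates and read-only scene data, the rows can be split across however many threads you have, with no locking or communication between them.

```cpp
#include <algorithm>
#include <cstdint>
#include <thread>
#include <vector>

// Placeholder "pixel shader": a simple gradient computed from the coordinates.
static uint32_t shade(int x, int y) {
    return static_cast<uint32_t>(((x * 255 / 1599) << 16) | ((y * 255 / 899) << 8));
}

int main() {
    const int width = 1600, height = 900;
    std::vector<uint32_t> framebuffer(static_cast<size_t>(width) * height);

    unsigned workers = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> pool;
    for (unsigned t = 0; t < workers; ++t) {
        pool.emplace_back([&, t] {
            // Each thread owns a disjoint set of rows, so no synchronization is needed.
            for (int y = static_cast<int>(t); y < height; y += static_cast<int>(workers))
                for (int x = 0; x < width; ++x)
                    framebuffer[static_cast<size_t>(y) * width + x] = shade(x, y);
        });
    }
    for (auto& th : pool) th.join();
}
```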

It's similar to how when console emulation was first getting started, a lot of emulators ran poorly despite how powerful computers were in comparison to the target consoles. This is because those consoles had an architecture based around sprites, pattern tables and palette settings, things computers don't typically handle at the hardware level, so those things had to be emulated too.

--- Kris Asick (Gemini)
--- Pixelmusement Website: www.pixelships.com
--- Ancient DOS Games Webshow: www.pixelships.com/adg

Reply 9 of 14, by m1so


What about something like Team Fortress 2, Trackmania Nations Forever or Prey in WARP10? OK, they are not the "newest" but they still use pixel shaders.

To be honest, I am not sure why WARP10 is supposed to be a "new" thing, considering the MMX/RGB/RAMP Direct3D software renderers did the same thing for DirectX 5 and 6 in the '90s.

Reply 10 of 14, by Davros

DosFreak wrote:

It's very slow, but faster than SwiftShader.

You sure? SwiftShader claims to be twice as fast in Crysis.

PS: Prey won't work in WARP or SwiftShader; it's OpenGL.

Guardian of the Sacred Five Terabyte's of Gaming Goodness

Reply 11 of 14, by m1so


I just tried SwiftShader on Trackmania Nations Forever and it surprised me; I expected much worse. 30.5 fps with absolutely everything turned down except for resolution (1600x900) is definitely worse than the 85 fps I get with everything maxed out on my GTX 660 at 1600x900, but way better than I expected (especially considering the glut of historical hi-res software-rendered flight simulators that required CPUs 10-20x as powerful to match the framerate a simple Pentium with a Voodoo 1 would provide for the accelerated version). It actually runs better than on my father's Celeron laptop with an Intel X3000 IGP, where benchmarks show 25 fps or so at 800x600 (though it is an unfair comparison, because my i7 875K at 2.9 GHz (3.6 GHz turbo boost) is much more powerful than a 2006-era Celeron).

What is funny is that I get exactly the same fps at 1024x768 with SwiftShader as at 1600x900.

Reply 12 of 14, by DosFreak


The last time I tested SwiftShader extensively was a couple of years ago; I tested Windows 8 WARP10 much more recently. I also tested mostly in a VM... since I was testing D3D games on NT4. Heh.

WARP10 seemed a lot faster to me, but I don't have benchmarks handy. It's possible the SwiftShader FAQ was comparing against the WARP10 for Windows 7, which means it was running Crysis in D3D10 versus SwiftShader running Crysis in D3D9.

Older OGL games like Quake 3 will work under WARP10 due to the OGL->D3D emulator built into Windows (it has to be enabled using the ACT), but newer OGL games probably won't.

How To Ask Questions The Smart Way
Make your games work offline

Reply 13 of 14, by Gemini000

DosFreak wrote:

Older OGL games like Quake 3 will work under WARP10 due to the OGL->D3D emulator built into Windows (it has to be enabled using the ACT), but newer OGL games probably won't.

But then, Quake 3 works in Windows 8 without having to do that. ;)

I still find it ridiculous that Windows 8 supports really old OpenGL software perfectly fine but can't properly handle anything using DX8 or older for graphics. We REALLY need a DX->OGL wrapper for Windows 8 at some point. I've tried FOUR existing wrappers after scouring the net, and none of them function properly, because Windows 8 intercepts the execution of pre-DX9 stuff before the wrappers can, so it can shove it into DX11's terrible legacy support. >:(

--- Kris Asick (Gemini)
--- Pixelmusement Website: www.pixelships.com
--- Ancient DOS Games Webshow: www.pixelships.com/adg

Reply 14 of 14, by SquallStrife

Gemini000 wrote:

The funny thing is, GPUs aren't really all that much more powerful than CPUs (in fact, most run at slower clock speeds), but they have an entirely different architecture. CPUs nowadays are designed to run only a few processes at the same time, but to handle those processes as fast as possible and provide them with lots of cache space and functionality. GPUs are designed to run HUNDREDS of processes at a time, but each process is limited as to what information it can access, and the processes can't typically talk to each other either.

In an extremely large nutshell, yes.

There's a great article written by someone at Intel about how grandiose claims of GPU superiority aren't super accurate: http://pcl.intel-research.net/publications/isca319-lee.pdf

The thing that sinks GPUs is that conditionals destroy performance. There's a great example of this on Whirlpool, where a GPGPU programmer is trying to create a conditional without using "if": http://forums.whirlpool.net.au/archive/1646705
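Roughly what that boils down to (an illustrative sketch, not the Whirlpool poster's actual code): instead of branching, you compute both sides and blend them with a 0/1 predicate, because GPU threads in the same group that diverge on an "if" end up executing both paths anyway.

```cpp
#include <cstdio>

// Branching select: cheap on a CPU thanks to branch prediction, but divergent
// threads on a GPU serialize and pay for both sides.
float select_branch(float x, float a, float b) {
    if (x > 0.0f) return a;
    return b;
}

// Branch-free select: the comparison becomes a 0/1 predicate and both sides
// are evaluated and blended, so every thread follows the same instruction stream.
float select_branchless(float x, float a, float b) {
    float mask = (x > 0.0f) ? 1.0f : 0.0f;
    return mask * a + (1.0f - mask) * b;
}

int main() {
    std::printf("%f %f\n",
                select_branch(2.0f, 10.0f, 20.0f),
                select_branchless(-1.0f, 10.0f, 20.0f));
}
```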

Meanwhile, Intel and AMD have gotten really good at branch prediction and out-of-order execution, so conditional branches can be pre-calculated or deferred. Of course this eats up valuable die space.

VogonsDrivers.com | Link | News Thread