VOGONS



Perception of speed


Reply 20 of 26, by leileilol

Rank: l33t++

It wasn't until Q3 that id released something that actually used SIMD (both MMX and SSE are used).

What Quake does for the Pentium is use assembly routines for the span drawing to handle more pixels at once, and for the lightmap-building routine (which is called whenever a dynamic light is happening). That's about as layman as I can put it.
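Roughly, the idea behind the span code (a C-style sketch of the well-known trick, not id's actual assembly) is to do the expensive perspective divide only once per run of pixels and interpolate linearly in between; on a Pentium the slow FDIV can then overlap with the integer pixel writes:

// Sketch only: perspective-correct texture coordinates are recomputed every
// SUBDIV pixels with a divide; the pixels in between use linear interpolation.
// In the real assembly the divide for the next run is started early, so it
// executes in the FPU while the integer unit writes the current run's pixels.
const int SUBDIV = 16;

void draw_span(unsigned char* dest, int count,
               float sz, float tz, float iz,     // s/z, t/z, 1/z at span start
               float dsz, float dtz, float diz,  // per-pixel gradients
               const unsigned char* texture, int texture_pitch)
{
    float z = 1.0f / iz;                         // divide for the run start
    float u = sz * z, v = tz * z;

    while (count > 0) {
        int run = count < SUBDIV ? count : SUBDIV;

        sz += dsz * run; tz += dtz * run; iz += diz * run;
        float z1 = 1.0f / iz;                    // one divide per 16 pixels
        float u1 = sz * z1, v1 = tz * z1;

        float du = (u1 - u) / run, dv = (v1 - v) / run;
        for (int i = 0; i < run; ++i) {          // cheap linear/integer work
            *dest++ = texture[(int)v * texture_pitch + (int)u];
            u += du; v += dv;
        }
        u = u1; v = v1;                          // resync to the exact values
        count -= run;
    }
}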

The only Quake material written before the Pentium was the teasers, the D&D campaign and other internal id stuff, etc. 😀

long live PCem

Reply 21 of 26, by Super_Relay

Rank: Newbie
keenmaster486 wrote:

Amen. We need a DOSBox benchmark thread for non-x86 systems such as the Pi.
DOSBox used to run at little more than bare 8088 speed on my 1st gen Pi. Haven't tried it on the newer ones. There should be ways to optimize DOSBox for Pi, though.

DOSBox on a properly optimised/overclocked Pi 3, compiled to take advantage of the NEON SIMD unit in the ARM CPU, will run Doom 1 timedemo 3 at about 35 fps.

Descent is playable but not as smooth.

I would say that as a retro PC, a Raspberry Pi is a mid-level 486.

Lack of 2D acceleration really hurts the Pi in both general web browsing and gaming, and I have found that using the dispmanx driver versions of SDL is actually slower than writing to the framebuffer directly.
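By "writing to the framebuffer directly" I mean roughly the following (a minimal sketch, assuming a packed 32bpp /dev/fb0; a proper version would also query fb_fix_screeninfo for the real row stride):

#include <fcntl.h>
#include <linux/fb.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstdint>

int main()
{
    int fb = open("/dev/fb0", O_RDWR);
    fb_var_screeninfo vinfo;
    ioctl(fb, FBIOGET_VSCREENINFO, &vinfo);       // query resolution and depth

    // Assume a packed 32bpp mode with a stride of xres_virtual pixels.
    size_t size = (size_t)vinfo.yres_virtual * vinfo.xres_virtual * (vinfo.bits_per_pixel / 8);
    uint32_t* pixels = (uint32_t*)mmap(0, size, PROT_READ | PROT_WRITE, MAP_SHARED, fb, 0);

    // Plot straight into video memory: one 32-bit store per pixel.
    for (uint32_t y = 0; y < vinfo.yres; ++y)
        for (uint32_t x = 0; x < vinfo.xres; ++x)
            pixels[y * vinfo.xres_virtual + x] = 0x00FF8000;   // XRGB orange

    munmap(pixels, size);
    close(fb);
    return 0;
}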

Reply 22 of 26, by spiroyster

Rank: Oldbie

@idspispopd
Re-reading, I can see how it looked like I made that claim. My bad, quotes on my part probably would have helped.

Scali wrote:

Actually, classic Pentium did not have any SIMD, and Quake does not make use of it.
Pentium MMX was the first x86 with SIMD extensions (Pentium Pro also did not have it, the Pentium II is basically a 'Pentium Pro MMX').
Quake is optimized for the pipelined x87 of the Pentium.

Good stuff. Serves me right for attempting to expand on an already dead-in-the-water joke.

Just had a look at the wiki and I must say I didn't even know the Pentium was THAT old, or that Pentiums existed without MMX o.0. I didn't know anyone with a Pentium until about '96, by which point everyone was talking about MMX, so I assumed MMX was what they brought to the table.

Just out of curiosity... Do you remember all this, or do you have to remind yourself? If it's the former, I applaud your synapses.

Reply 23 of 26, by Scali

Rank: l33t
spiroyster wrote:

Just out of curiosity... Do you remember all this, or do you have to remind yourself? If it's the former, I applaud your synapses.

I remember most of it myself... the 486/Pentium era was when I was maturing as a graphics programmer, and wrote various 3d renderers in assembly myself.
Learning every new architecture was very important at the time, to learn how to best design a renderer around its strengths and weaknesses.
When the Pentium MMX arrived it was quite a big thing in my world.
Up until the PIII/Athlon era more or less, I was still very much involved with microarchitectural optimizations, so I knew these CPUs up close and personal.
After that, things shifted to GPUs and shader programming instead, so I became more GPU-centered, and CPU-specifics became less important. On the other hand, the Core2 was more or less a derivative of the PIII, and Intel has been doing evolutionary steps since, so there wasn't all that much to learn about new CPUs anyway.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 24 of 26, by spiroyster

Rank: Oldbie
Scali wrote:
spiroyster wrote:

Just out of curiosity... Do you remember all this, or do you have to remind yourself? If it's the former, I applaud your synapses.

I remember most of it myself... the 486/Pentium era was when I was maturing as a graphics programmer, and wrote various 3d renderers in assembly myself.
Learning every new architecture was very important at the time, to learn how to best design a renderer around its strengths and weaknesses.
When the Pentium MMX arrived it was quite a big thing in my world.
Up until the PIII/Athlon era more or less, I was still very much involved with microarchitectural optimizations, so I knew these CPUs up close and personal.
After that, things shifted to GPUs and shader programming instead, so I became more GPU-centered, and CPU-specifics became less important. On the other hand, the Core2 was more or less a derivative of the PIII, and Intel has been doing evolutionary steps since, so there wasn't all that much to learn about new CPUs anyway.

Have you heard of FlipCode by chance? That was my mecca as a budding young games developer; then karma probably kicked in and I found my professional self in the realm of CAD, and I have been there ever since S:

I read your blog towards the end of last year and it is a project that has gone into the big box of projects to attempt at some point in my life. I was an ardent Amiga user (not developer) for a number of years (right up until about '96, when I got the Pentium, which cost me an arm and a leg at the time), and later in life it was something I always wanted to return to. I've had the AMOS books and a big pile of 'Storm' disks for years, but never found the enthusiasm for it until I read your Amiga blog (nice write-up btw). Then I read your rundown of the 8088 and the box got bigger. Please stop.

I've been stuck IRL for the past few years, and in a vague attempt to try and relive some nostalgia, decided to venture here. It has quickly occurred to me that things I remember may have happened +/- 3 years from when I remember them S: Which is something I thought I would never be saying. </rant>

In the interests of mitigating thread derailment, and in relation to my Amiga experiences: I too have wondered, for years in fact, why I was only 20-30% less productive on a 7.5 MHz 68K than on a Pentium at ten times the speed (clock for clock). As regards the PC-only platform, I think OS bloat certainly applies. It can come about through a number of indirect consequences: the choice of language and compiler, heavy use of libraries which are themselves complex due to the rich feature sets many have these days, among other reasons. While the core execution paths of the OS may well be super-optimised for the platform, this does not necessarily apply to the extended and dynamic resources used on a per-application basis. As an example, has anyone written a GUI with Win32, and then the same in WinForms or even XAML, to see the differences (or even C# P/Invoking Win32/GDI stuff and comparing that)? I would be interested in the results. All three programs essentially achieve the same thing, with slight visual differences but quite a lot of change in requirements under the hood.
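To give an idea of the bare-Win32 end of that comparison, an empty window is already roughly this much boilerplate (a sketch only; the WinForms equivalent is a few lines plus designer-generated code, and the XAML one a short markup file):

#include <windows.h>

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
{
    if (msg == WM_DESTROY) { PostQuitMessage(0); return 0; }
    return DefWindowProc(hwnd, msg, wp, lp);
}

int WINAPI WinMain(HINSTANCE hInst, HINSTANCE, LPSTR, int nCmdShow)
{
    // Register a window class, create the window, then pump messages.
    WNDCLASS wc = {};
    wc.lpfnWndProc = WndProc;
    wc.hInstance = hInst;
    wc.hCursor = LoadCursor(nullptr, IDC_ARROW);
    wc.lpszClassName = TEXT("BareWin32Demo");
    RegisterClass(&wc);

    HWND hwnd = CreateWindow(TEXT("BareWin32Demo"), TEXT("Bare Win32 window"),
                             WS_OVERLAPPEDWINDOW, CW_USEDEFAULT, CW_USEDEFAULT,
                             640, 480, nullptr, nullptr, hInst, nullptr);
    ShowWindow(hwnd, nCmdShow);

    MSG msg;
    while (GetMessage(&msg, nullptr, 0, 0) > 0) {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
    return 0;
}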

Phrases like "premature optimisation is the root of all evil" and "don't reinvent the wheel" have added to a development culture in which using off-the-shelf components for application development is the norm, with little consideration given to optimisation; in fact, platform support is much more highly valued, and assembly is not portable in most cases, so the requirement for specific optimisation is diminished. I speak from a higher-level application point of view only, though; this most certainly does not apply to games, where compatibility is much more tightly defined and the prerequisites are higher. Compare Office 95 to Office 365 (an application), and something like Doom to whichever the latest release has been (a game): 20 years of evolutionary development is much more apparent in the gaming world than in the application world. We have just got lazy as developers (imo; sorry to say, older developers had a much broader range of skills that could be applied across many disciplines in development. I've seen this diminish over the years as we tend to get developers who are specialised in certain fields. Given the lack of resources in smaller dev teams and the increasing size of products to maintain, we need 'jacks of all trades' rather than 'masters'), and the added power of hardware these days, combined with (as the OP suggests) the perception of how fast a task should be carried out, perhaps leads to a notion of inefficient computing (which probably is present, but is rather hard to quantify).

This is all just my 2 cents. Perhaps it is also us, the user, who expects more because of our experience years ago of something taking "just as long to do", despite ~20 years of technological difference?

Reply 25 of 26, by Scali

Rank: l33t
spiroyster wrote:

Have you heard of FlipCode by chance? That was my mecca as a budding young games developer; then karma probably kicked in and I found my professional self in the realm of CAD, and I have been there ever since S:

Yup, I had a few 'Images of the day' posts there 😀
http://www.flipcode.com/archives/08-29-2002.shtml
http://www.flipcode.com/archives/09-11-2003.shtml
http://www.flipcode.com/archives/01-16-2004.shtml
http://www.flipcode.com/archives/09-19-2004_fire3d.shtml

spiroyster wrote:

(nice write-up btw).

Thanks!

spiroyster wrote:

Please stop.

No.

spiroyster wrote:

As an example, has anyone written a GUI with Win32, and then the same in WinForms or even XAML, to see the differences (or even C# P/Invoking Win32/GDI stuff and comparing that)? I would be interested in the results. All three programs essentially achieve the same thing, with slight visual differences but quite a lot of change in requirements under the hood.

I have actually... well, not very formally, but I have written my Direct3D and OpenGL engines in a componentized form in a DLL, and they can be hosted by anything that has a window-handle, basically.
I have written host programs for native C++, C# with WinForms and C# with WPF (XAML), and the results were quite consistent. I mean, the basic window handling overhead is negligible, and rendering performance was about equal. Perhaps if you'd go down heavy into the actual GUI stuff, you'd find that one type of GUI components can handle events quicker than another, but I think it's a non-issue for regular user-interaction.
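In sketch form, the hosting interface needs to be little more than this (hypothetical names, not my actual engine code): a flat C API exported from the DLL that takes an HWND, which native C++ calls directly and C# reaches via P/Invoke (WinForms can pass Control.Handle, WPF a handle obtained through an HwndHost):

#include <windows.h>

extern "C" {
    // Create a renderer bound to an existing window; returns an opaque handle.
    __declspec(dllexport) void* __stdcall RendererCreate(HWND hwnd, int width, int height);

    // Draw one frame into the window the renderer was created with.
    __declspec(dllexport) void __stdcall RendererRenderFrame(void* renderer, double timeSeconds);

    // Release the Direct3D/OpenGL resources and destroy the renderer.
    __declspec(dllexport) void __stdcall RendererDestroy(void* renderer);
}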

spiroyster wrote:

We have just got lazy as developers (imo; sorry to say, older developers had a much broader range of skills that could be applied across many disciplines in development. I've seen this diminish over the years as we tend to get developers who are specialised in certain fields. Given the lack of resources in smaller dev teams and the increasing size of products to maintain, we need 'jacks of all trades' rather than 'masters'), and the added power of hardware these days, combined with (as the OP suggests) the perception of how fast a task should be carried out, perhaps leads to a notion of inefficient computing (which probably is present, but is rather hard to quantify).

I think there is definitely some truth to that. I work at a rather small company as well, and I often have to fulfill the 'jack of all trades' role, while not being able to fully exploit my 'mastery' in specific areas.
I also see that sometimes projects are assigned to people who lack certain skills for that project, and come up with suboptimal solutions. But the only people who do have the skills, were already tied up in other projects.
The worst part is when developers lack certain skills, but aren't aware of this. I think this is a relatively new phenomenon. In the old days, software development was more low-level and hands-on anyway, so it would have been more obvious for a developer when his skills were inadequate for a certain task. Not to mention that the low processing resources of the machines at the time didn't give you much to hide behind. I mean, today, if you write a well-optimized GIF decoding routine, you may be able to decode a large image in 1 ms. A poor implementation may be 100x as slow, but still, at 100 ms per large GIF, in most use cases, you may not even notice it. It still feels 'instant' if you just want to display a handful of GIF images (eg displaying a web page).
In the old days it would be the difference between loading a GIF in 6 seconds, or 10 minutes. Very confronting. And you would also be more aware that there were other programs that could load that GIF in 6 seconds, so you must be doing something wrong.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 26 of 26, by spiroyster

Rank: Oldbie
Scali wrote:

Yup, I had a few 'Images of the day' posts there 😀
http://www.flipcode.com/archives/08-29-2002.shtml
http://www.flipcode.com/archives/09-11-2003.shtml
http://www.flipcode.com/archives/01-16-2004.shtml
http://www.flipcode.com/archives/09-19-2004_fire3d.shtml

Thought as much 😀. This was also the place that tempted me to the dark side (raytracing), after seeing tbp's and jbikker's tutorials there. flipcode and ompf, I owe a lot of my professional career to.

Scali wrote:
spiroyster wrote:

As an example, has anyone written a GUI with Win32, and then the same in WinForms or even XAML, to see the differences (or even C# P/Invoking Win32/GDI stuff and comparing that)? I would be interested in the results. All three programs essentially achieve the same thing, with slight visual differences but quite a lot of change in requirements under the hood.

I have actually... well, not very formally, but I have written my Direct3D and OpenGL engines in a componentized form in a DLL, and they can be hosted by anything that has a window-handle, basically.
I have written host programs for native C++, C# with WinForms and C# with WPF (XAML), and the results were quite consistent. I mean, the basic window handling overhead is negligible, and rendering performance was about equal. Perhaps if you'd go down heavy into the actual GUI stuff, you'd find that one type of GUI components can handle events quicker than another, but I think it's a non-issue for regular user-interaction.

One day I may ask a question or pose a scenario you have not tested or experienced. 😀
I have recently done this too. I'm C++ till I die, but due to a new role I have found myself in the midst of cotton-wool C# land (no disrespect to C#, I am beginning to love it; stuff can get done so quickly in it once you submit to .NET). I probably wasted a few weeks researching details of WPF (like custom double buffering, and lots of stuff which is too Windowsy for my liking) just because I didn't want to have to track SharpGL in my repo. (Sad, I know.) Once I realised it was all just a handle away, the upshot was the blissful ease of XAML while also being able to offload the entire business logic (GL usage 'n' all) into my C++ homeland. Makes me wonder why anyone bothers, or why the usual answer on SO is "use SharpGL". Very rarely do I feel entirely comfortable with the structuring of a framework I have written, but I must say, I am a XAML convert. This might only be because XAML 'allows' this fast integration of its context with other subsystems; it's all there in MSDN (but I can't help thinking they don't make it easy for you, as they are rather GL-phobic, and have been since ~1.2 as you probably know). Much heavier usage of the XAML framework, I would have thought, would put more strain on resources (which, as you say, may be in the order of ms, or ns) and when comparing would put Win32 in front. The XAML app I could write in little to no time, however the equivalent in Win32 would be a version of hell for me, so I won't be trying to replicate multi-window/dialog GUIs to test performance any time soon.

Scali wrote:
spiroyster wrote:

We have just got lazy as developers (imo; sorry to say, older developers had a much broader range of skills that could be applied across many disciplines in development. I've seen this diminish over the years as we tend to get developers who are specialised in certain fields. Given the lack of resources in smaller dev teams and the increasing size of products to maintain, we need 'jacks of all trades' rather than 'masters'), and the added power of hardware these days, combined with (as the OP suggests) the perception of how fast a task should be carried out, perhaps leads to a notion of inefficient computing (which probably is present, but is rather hard to quantify).

I think there is definitely some truth to that. I work at a rather small company as well, and I often have to fulfill the 'jack of all trades' role, while not being able to fully exploit my 'mastery' in specific areas.
I also see that sometimes projects are assigned to people who lack certain skills for that project, and come up with suboptimal solutions. But the only people who do have the skills, were already tied up in other projects.
The worst part is when developers lack certain skills, but aren't aware of this. I think this is a relatively new phenomenon. In the old days, software development was more low-level and hands-on anyway, so it would have been more obvious for a developer when his skills were inadequate for a certain task. Not to mention that the low processing resources of the machines at the time didn't give you much to hide behind. I mean, today, if you write a well-optimized GIF decoding routine, you may be able to decode a large image in 1 ms. A poor implementation may be 100x as slow, but still, at 100 ms per large GIF, in most use cases, you may not even notice it. It still feels 'instant' if you just want to display a handful of GIF images (eg displaying a web page).
In the old days it would be the difference between loading a GIF in 6 seconds, or 10 minutes. Very confronting. And you would also be more aware that there were other programs that could load that GIF in 6 seconds, so you must be doing something wrong.

Couldn't agree more; I feel we used to be more at one with our hardware. Our constant striving to maximise code output with as little interaction as possible after authoring means we have found ourselves detached from the whole process. Writing tools for the chain used to be part of the fun, I thought; now CI must be paired with tried and tested systems named after butlers, which makes me feel that extra bit removed from the whole cycle. This is also why I find blogs such as yours so interesting. If I had my way there would be at least 2 modules in a 3-year university course teaching and investigating vintage(er) hardware and architectures.

Here is an example of what I am talking about.
Note the blissful elegance of this snippet, which reads a text file into a string. Looks nice and pure and very STLatus quo.

#include <fstream>
#include <iterator>
#include <string>
std::ifstream t("file.txt");
std::string str((std::istreambuf_iterator<char>(t)),
                std::istreambuf_iterator<char>());

Anyone with that extra bit of understanding about the mechanics of what's going on will tell you this is a bad idea. It's this kind of replication for readability and whatnot which has allowed us to create far more complex programs; however, we have sacrificed common sense when it comes to efficiency and relied on the improvement of hardware to save us from having to go back and improve what we should, where it matters.
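For contrast, the less elegant version that a bit of understanding of the mechanics pushes you towards: size the string up front and read the file in one go, instead of growing it a character at a time through an iterator (a sketch only, error handling omitted):

#include <fstream>
#include <string>

std::string read_file(const char* path)
{
    std::ifstream t(path, std::ios::binary);
    t.seekg(0, std::ios::end);
    std::string str;
    str.resize((size_t)t.tellg());                 // allocate once, up front
    t.seekg(0, std::ios::beg);
    t.read(&str[0], (std::streamsize)str.size());  // single bulk read
    return str;
}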