ADDiCT, you'd be amazed just how many different code paths exist within systems built on 10-year-old 3D engines.
You asked how I know MC1 contains ZERO x87 FPU instructions while MC2 does: the i486SX shipped with its FPU disabled, so simply attempting to execute both binaries on an i486SX (with no FPU emulation loaded) is the easiest way to check. The binary with no x87 instructions runs; the one containing them faults.
FYI: Intel never made a Pentium with the FPU disabled.
This was high performance coding best practice until languages like Java became popular.
eg: Code checks CPU feature flags; if given flags are present, different code paths execute at runtime.
There are many ways to implement this:
eg: Check for flags and store them in an array of variables, then check those variables when calling various functions, libraries, loops, etc., and execute 'blocks' of code specifically designed to give better performance / detail / precision / quality when the given flags are present.
Quite often a library, especially a CPU-manufacturer-supplied one, will do this automatically, so the programmer is often unaware it is happening (although in that scenario any skilled programmer / tester would know why a given bug tends to occur on one CPU family / revision and not on others).
Some would even go so far as to optimize for various CPUs, as two different CPUs with [mostly] matching feature flags may be internally different. (MMX implementations are one obvious example, but the concept predates MMX: the input and output are the same, yet the process used and the speed of various implementations can, and do, differ.)
eg: Compile a block of code for a given CPU family, feature-set, etc., then insert it as ASM to be called when the given vars are flagged. Rinse, Wash & Repeat this many times to create some very high performance code. (This process can be automated, BTW.)
It is quite possible, likely in fact, that a dedicated programmer of Peter Molyneux's generation (born 1959, in the UK) would do this.
It is also quite likely that 'manager' types with only a basic understanding of the architecture would dispute the requirement at times, and the testing... Oh, and it looks like Peter Molyneux had disagreements with his short-term employer - wow, what a coincidence. I wonder how many other experienced, dedicated programmers suffered a similar fate. (Hint: thousands, globally.)
A more recent example is the difference in SSE2 implementation between, say, the Pentium 4 Northwood, the P4 Prescott, and the Core 2 Solo (I say Solo because the second core has nothing to do with the implementation being different).
Why do you need to be such a sceptic?
================================================
Quote Source: http://www.mobygames.com/game/magic-carpet-2- … he-netherworlds the
Trivia
Magic Carpet 2 features several, what I think are, somewhat hidden ads for the Intel Pentium processor. In several of the regular levels you can find an entrance to bonus Demon Lord levels. While these Demon Lord are being loaded, briefly a screen with the Intel Inside logo and the message "Pentium processor detected, configuring for optimal performance" is shown. The funny thing is, you also get to see this message when your PC is equipped with an AMD processor.
================================================
Why would such a message appear for AMD CPU owners? (Note: the person making the claim in the MobyGames trivia did not specify a generation of AMD CPUs, but it doesn't take a genius to figure out it would be the K6, K6-2, and maybe K5 series AMD processors.)
Obviously Magic Carpet 2 checks CPU feature flags rather than the CPUID vendor / family identification (the wiser of the two methods).
Now the text "...configuring for optimal performance..." - what does that imply?
No, no, it is just an Advertisement and nothing else.
It is for these reasons that Developers, Testers, and Software Support people ask for full system specs, sometimes crash dumps of code in memory, and at the very least a basic system overview when people indicate 'their PC' has a 'potentially unique' problem.
It is not that far-fetched at all, and there is no need to be such a sceptic.
The point is, instead of having end users re-compile software or needing to share 'secret' source code, the shipped binary is more like a fat binary: it contains pre-compiled / pre-optimized blocks of code for various CPU families.
If you had a 6th/7th generation processor and, say, the Linux source code, would you compile it only for i386 support and miss out on greater performance due to fear of a few new instructions, or of a better/faster way of executing code on a given CPU family?
- Of course not.
Sorry to 'rant', but it needed to be said.