VOGONS

Reply 100 of 115, by SquallStrife

Rank: l33t
Scali wrote:

Do you also remember when they first moved to PPC?

As I recall, a fairly substantial contributor was that the PPC G5 was too hot and too power-hungry to go into a laptop.

I don't know that Intel's catch-up was all "brute force" either. Their transition away from NetBurst-based P4s to Banias/Dothan/Conroe was a step away from brutal clock-speed ramping. SSE2 and SSE3 were serious competitors to AltiVec, taking away that advantage from PPC as well.

VogonsDrivers.com | Link | News Thread

Reply 101 of 115, by Scali

Rank: l33t
SquallStrife wrote:

So in lieu of a patch, the most practical and immediate way to enjoy the game "as it was intended to be" is on a period-correct system. And he's right.

Well, almost.
I would say "period-correct" is not a very accurate description.
Namely, it is certainly not guaranteed that the machines, OSes and drivers on the market at the time of release were capable of running said game correctly (Rage leaps to mind, which needed various driver/game updates from vendors before it finally worked).
Conversely, there are a lot of components that simply don't factor into this equation, because they are not the ones causing the issue.
The given example of ISA cards is a very poor one. I went from ISA to VLB to PCI to AGP and PCI-e, and that was never a problem. As I say, as long as your card was "VGA-compatible" or "SB-compatible" or whatever, it would work, and it did. And this was mostly in the DOS age, where games and demos actually DID talk directly to the hardware.
It's easier to hide/fix incompatibilities these days since everything goes through drivers now (such as the above example of DX9/DX10 cards, where the underlying hardware doesn't even look remotely like the previous generation, but because the driver sits between the hardware and the OS/applications, this simply doesn't matter).

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 102 of 115, by PhilsComputerLab

Rank: l33t++

Fair enough, "period correct" is a term that could mean a lot of things.

Many of us know each other very well and we know that these terms are a bit flexible.

To illustrate how I approach a game that I have issues with on my Windows 8.1 machine and want to try out under XP: I look up the release date, then study the documentation. Are they mentioning specific graphics cards? Any known issues? Do they recommend a driver? Are there patches for the game?

Then I look at Nvidia (sorry, I don't do ATI cards) and check drivers from when the game was released to roughly a year later. I look at the release notes and look for entries about the game. Anything fixed? Any performance boosts? Any known issues? The Splinter Cell games are mentioned frequently throughout later drivers.

So usually I end up with a driver that's 6-12 months newer than the game, and that usually means hassle-free gaming.

I haven't done the same for sound cards, but that's more because of a lack of drivers than anything else, and sound (Creative cards) always seems to work; I haven't run into a single issue under any OS. They also work just fine in W8.1, and even better with ALchemy.

Regarding ISA cards: many people come to VOGONS and ask for advice on DOS gaming PCs, and the general consensus for DOS is to get an ISA card. We know about the PCI options and all of that, and they work for many games, but not for all, and it's often people coming here for advice on exactly those few games that do not work. For exotic configurations such as AWE or 3D accelerators there are no emulators/wrappers available, and you have no choice but to play on an old PC or start an emulation/wrapper project yourself.

Last edited by PhilsComputerLab on 2015-01-07, 22:44. Edited 1 time in total.

YouTube, Facebook, Website

Reply 103 of 115, by Scali

Rank: l33t
SquallStrife wrote:

As I recall, a fairly substantial contributor was that the PPC G5 was too hot and too power-hungry to go into a laptop.

Yes, as I say, IBM was not interested in supporting the markets that Apple was active in.

SquallStrife wrote:

I don't know that Intel's catch-up was all "brute force" either. Their transition away from Netburst-based P4's to Banias/Dothan/Conroe was a step back from brutal clock-speed ramping.

I wasn't talking about the CPUs specifically, but rather Intel's entire operation. As I said, Intel spends billions of dollars on R&D, they have the most advanced fabs and all that. They're currently moving from 22 nm to 14 nm, while most of their competitors are still stuck at 28 nm.
Their CPUs don't have to be as elegant and efficient as the competition.

SquallStrife wrote:

SSE2 and SSE3 were serious competitors to AltiVec, taking away that advantage from PPC as well.

I'm sorry, but... 🤣!
SSE2 and SSE3 were early steps in the direction of floating-point SIMD for Intel, but AltiVec was FAR more sophisticated and elegant.
In fact, even today AltiVec doesn't look all that bad. Intel added horizontal operations much later, in SSE3/SSSE3 (2004/2006), while AltiVec (1998) has always had those, for example.
AltiVec operations also have more operands, allowing more efficient register use and such.
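To make the operand-count point concrete, here is a minimal sketch (my illustration, not from this thread; axpy is just a made-up helper name) of a multiply-add written once for AltiVec and once for SSE. AltiVec has a single fused multiply-add that takes three source operands and writes a separate destination, while pre-FMA x86 needs two instructions whose underlying encodings are two-operand and destructive.

#if defined(__ALTIVEC__)
#include <altivec.h>

vector float axpy(vector float a, vector float x, vector float y)
{
    /* d = a*x + y in one non-destructive, three-source-operand instruction (vmaddfp). */
    return vec_madd(a, x, y);
}

#elif defined(__SSE__)
#include <xmmintrin.h>

__m128 axpy(__m128 a, __m128 x, __m128 y)
{
    /* Two steps (mulps, then addps); at the ISA level each instruction
       overwrites one of its own source registers. */
    return _mm_add_ps(_mm_mul_ps(a, x), y);
}
#endif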

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 104 of 115, by SquallStrife

Rank: l33t

Oh, I agree that they came far too late; I just meant they took away the "big advantage" that PPC once enjoyed.

The kind of advantage that gave weight to the old "Macs are better for creative stuff" trope.

VogonsDrivers.com | Link | News Thread

Reply 105 of 115, by Scali

Rank: l33t
SquallStrife wrote:

Oh I agree that they came far too late, I just meant they took away the "big advantage" that PPC once enjoyed.

Even without the advantage of AltiVec, PPC was a much better choice than x86 in the early days.
PPC was a modern 32-bit RISC architecture: very elegant, power-efficient, and capable of high clockspeeds. At the time it compared very favourably with the Intel Pentium... And then came the dramatic Pentium Pro and eventually the Pentium II, which were the huge mountain Intel had to climb.
Namely, the Pentium Pro/II architecture was Intel's first with a RISC backend. So Intel was trying to leave the limitations of CISC behind by translating x86 code into RISC-like operations on the fly. This allowed them to get near-RISC levels of performance and clockspeed scaling... But the cost of the on-chip x86 translation was very high in the early days (and in fact it is still one of the main things AMD is struggling with today... getting enough x86 instructions decoded to feed the backend).
It took many years for Intel to finally close that gap with the native RISC CPUs (by which time the RISC CPUs were running into similar legacy issues themselves, with ever more complex instruction sets, and having to remain compatible with 32-bit code even though 64-bit was now the standard).

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 106 of 115, by smeezekitty

Rank: Oldbie

What isn't? Even back in 1978 it was already quite an outdated architecture. Its direct competitor, the Motorola 68000, was far more modern and powerful (much closer to the 80386).
To make matters worse, the PC used the 8088 instead of the 8086. The 8-bit bus completely bottlenecked the CPU, making this '16-bit' CPU at 4.77 MHz perform barely any faster than the good old 6502 at 1 MHz.

And over time they just kept extending it and extending it, making the instruction set a complete mess, with all these CPU modes it has to support, etc.

In 1978 it may have been outdated, but it has come a long way.

There were a number of major update milestones that helped it catch up architecturally (a small runtime feature-detection sketch follows the list):

286: Protected mode and better extended memory access
386: Practical protected mode and true 32-bit operation
486: On-board cache, integrated FPU and a pipelined execution unit
Pentium: Superscalar (dual-pipeline) execution (out-of-order execution came later, with the Pentium Pro)
Pentium MMX: MMX
Pentium 3: SSE
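As a hedged aside (not from the thread), this is how software typically checks for the MMX/SSE-era milestones above at runtime. It uses the GCC/Clang __builtin_cpu_supports() builtin; on other compilers you would query CPUID directly.

#include <cstdio>

int main()
{
    __builtin_cpu_init();  // harmless to call explicitly before the queries below
    std::printf("MMX:  %s\n", __builtin_cpu_supports("mmx")  ? "yes" : "no");
    std::printf("SSE:  %s\n", __builtin_cpu_supports("sse")  ? "yes" : "no");
    std::printf("SSE2: %s\n", __builtin_cpu_supports("sse2") ? "yes" : "no");
    return 0;
}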


Do you also remember when they first moved to PPC?
At the time the PPC was far superior to any x86.
They only moved to x86 because over time, Intel just caught up with everything through brute force. Motorola was no longer interested in developing the PPC further. So Apple had to move to IBM, but they weren't really interested in making CPUs for the market that Apple wanted to target either. So Apple was basically stuck with the G5 at some point... no more CPU updates.
So, eventually they had to move to x86, because x86 had a huge marketshare, and Intel spends billions of dollars on CPU updates each year, and x86 had already overtaken the PPC anyway.

At the time, yes. Intel used more than brute force to catch up (well, the P4 was brute force), but they re-engineered it to perform quite well per clock with Conroe.
I quite like the PPC architecture for a couple of reasons, but there is really no reason to move away from x86 as it is now (even though many mobile devices use ARM).
My personal experience with ARM has been that it seems slow.

Reply 107 of 115, by maximus

Rank: Member
Scali wrote:

No. The paradigm shift you mention was actually already in DX9. The Radeon 9700 did not have any fixed-function pipeline, nor did it have a dedicated integer pipeline. It did all shading, fixed-function, DX8.x integer, and DX9 on its floating point pipelines.

The paradigm shift in DX10 would be unified shaders.
Neither paradigm shift really has any effect on compatibility, since D3D is a hardware abstraction layer. The application doesn't 'see' the hardware, so whether there actually is a real fixed function pipeline, or whether the driver just pre-loads some vertex/pixelshaders that are equivalent to the given fixed function pipeline makes absolutely no difference.
Likewise, performing integer operations with a floating point unit, provided the mantissa has enough precision, makes no difference either. In fact, in the Pentium CPU Intel did just that: there is no integer circuitry for division. For a div/idiv instruction, the CPU will forward the operation to the FPU portion.

Maybe ATI had already done this voluntarily with the R300 architecture, but everything I've seen indicates that the change officially happened with DirectX 10:

"Direct3D10 finally completes the break from the legacy fixed-function pipeline. Developers will use the programmable pipeline to emulate the older, fixed-function steps." (source)

"Direct3D 10 no longer supports the fixed-function transform and lighting pipeline." (source)

Maybe this is the difference: DirectX 9 hardware must support the fixed function pipeline, though not necessarily in hardware. DirectX 10 hardware is only required to support DirectX 10, and DirectX 9 emulation is done entirely in software as a courtesy.

PCGames9505

Reply 108 of 115, by Stiletto

Rank: l33t++

So, we're all good here, Phil and Scali, yes?

Although he'd been lurking for a while, I invited Scali here to VOGONS because I follow his blog when I can and he often touches on things we talk about, like 3D programming for retro 3D cards and software 3D renderers for demos (he's a demoscene guy). Lots of demoscene people have passed through these forums, including Trixter, with normal gamers sometimes profiting from whatever sparks their interest.

I didn't much care for the "you don't understand what we're about here, please consider going elsewhere" comments. Up until the last several years, we would have told actual hardware collectors the same thing. Forums grow and change, but the original intention of the forum was to discuss the hacks, shims, wrappers and setting changes needed for game compatibility in Windows. Then we got into emulators (well, almost from the start) and later actually into hardware collecting.

Who knows, Scali may indeed possess the skills needed to reverse engineer why Splinter Cell has such issues on modern drivers and come up with some DLL replacement, and you scared him off. 😜

"I see a little silhouette-o of a man, Scaramouche, Scaramouche, will you
do the Fandango!" - Queen

Stiletto

Reply 109 of 115, by Scali

Rank: l33t
smeezekitty wrote:

I quite like the PPC architecture for a couple reasons but there is really no reason to move away from x86 as it is now (even though many mobile devices use ARM)

I agree. x86 has left all alternatives in the dust over the years, and it will likely take the place of ARM as well, as Intel has been pushing for lower power consumption in recent years, and has brought out-of-order execution to Atom now.
Intel's 14 nm advantage will be very interesting for low-power devices.

But that still doesn't make x86 a good or pretty architecture. I would think that the 68000, PPC, ARM or whatever other architecture would have had better results if it had received the same amount of investment in R&D, because you simply start from a better basis.

The other day a friend of mine did a compile test of the Linux base system for various architectures. The differences in code size were quite amazing. The smallest code was around 112 MB, while 68k and most RISCs were around 120-130 MB. 32-bit x86 was 160 MB, and 64-bit x64 was a whopping 210 MB. That's how inelegant x86 is, with its variable-length instructions that keep getting larger as new features are added. And that's why instruction decoding is such a bottleneck; a bottleneck that AMD can't seem to get under control anymore, only Intel can.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 110 of 115, by Scali

Rank: l33t
maximus wrote:

Maybe ATI had already done this voluntarily with the R300 architecture, but everything I've seen indicates that the change officially happened with DirectX 10:

"Direct3D10 finally completes the break from the legacy fixed-function pipeline. Developers will use the programmable pipeline to emulate the older, fixed-function steps." (source)

"Direct3D 10 no longer supports the fixed-function transform and lighting pipeline." (source)

They are talking about the API-side, not the hardware-side.

maximus wrote:

Maybe this is the difference: DirectX 9 hardware must support the fixed function pipeline, though not necessarily in hardware. DirectX 10 hardware is only required to support DirectX 10, and DirectX 9 emulation is done entirely in software as a courtesy.

Well, almost.
Yes, DirectX 9 drivers (not hardware) have to support the fixed function pipeline. This is because DirectX 9 has support for legacy hardware/drivers as well. I have an overview on my blog: https://scalibq.wordpress.com/2012/12/07/dire … -compatibility/
As you can see, DirectX 9 requires only a DDI7 driver, which means that any DirectX 7-class or higher hardware is supported by the API. Since programmable shaders weren't introduced until DirectX 8, this implies that DirectX 9 must support fixed function as well.
But it is up to the driver how this is implemented.

DirectX 10 has no fixed function pipeline in the API anymore, and therefore your hardware MUST support shaders (and they made the requirement SM4.0 as well, so you have to have full DX10-compliant hardware). Even if your hardware had a fixed-function pipeline on board, you can't use it via DirectX 10. But by the time DX10 was introduced, all vendors had long replaced their fixed-function pipelines with shaders anyway.
The funny thing, as you can see, is that DX11 requires only a DDI9 driver, and as such it can run on DX9-class hardware with SM2.0. This is mainly for Windows Phone support, where most SoCs don't support DX10+ levels yet.
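A minimal sketch of what that looks like on the API side (my own illustration, not something from this thread): creating a DX11 device while allowing it to fall back to downlevel feature levels, including 9_1 for SM2.0-class hardware behind a DDI9 driver.

#include <d3d11.h>
#pragma comment(lib, "d3d11.lib")

int main()
{
    // Accept anything from full DX10/11 hardware down to DX9-class (SM2.0) parts.
    const D3D_FEATURE_LEVEL levels[] = {
        D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_10_0,
        D3D_FEATURE_LEVEL_9_3,  D3D_FEATURE_LEVEL_9_1,
    };
    ID3D11Device*        device  = nullptr;
    ID3D11DeviceContext* context = nullptr;
    D3D_FEATURE_LEVEL    granted = D3D_FEATURE_LEVEL_9_1;

    HRESULT hr = D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                                   levels, sizeof(levels) / sizeof(levels[0]),
                                   D3D11_SDK_VERSION, &device, &granted, &context);
    if (SUCCEEDED(hr)) {
        // 'granted' reports which feature level the driver/hardware actually provides.
        if (context) context->Release();
        if (device)  device->Release();
    }
    return 0;
}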

Each DirectX version is a standalone API (unlike OpenGL, where there is a single API with different versions). So DirectX 10 does not have anything to do with DirectX 9 whatsoever.
DirectX 9 and earlier APIs all live together, each interface in their own COM-like objects.

"DirectX 9 emulation" is not the right term if you ask me. DirectX does not specify how hardware should implement functionality, and there are often huge differences between hardware from different vendors and/or generations.
You are not 'emulating' anything, you are implementing a spec. So you can't say that there is "one true implementation" and that any other implementation is an "emulation" of the real implementation.
It's also not done "entirely in software", since that would imply software-rendering, which would be far too slow. It's just that older APIs had fixed-function processing, which is now pre-programmed by the driver on programmable hardware. It's still done in hardware, and it's often actually faster than a fixed-function solution, since it is more flexible.
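As a hedged illustration (mine, not Scali's; it assumes an already created IDirect3DDevice9 pointer), this is what "fixed function" looks like from the application side in D3D9. On shader-only hardware there is no fixed-function silicon behind these calls; the driver fulfils them with its own pre-programmed shaders, exactly as described above.

#include <d3d9.h>

// Fragment: set up fixed-function transform and lighting state on an existing device.
void setup_fixed_function(IDirect3DDevice9* device, const D3DMATRIX& world,
                          const D3DMATRIX& view, const D3DMATRIX& proj)
{
    device->SetTransform(D3DTS_WORLD,      &world);
    device->SetTransform(D3DTS_VIEW,       &view);
    device->SetTransform(D3DTS_PROJECTION, &proj);
    device->SetRenderState(D3DRS_LIGHTING, TRUE);  // fixed-function vertex lighting
    device->SetFVF(D3DFVF_XYZ | D3DFVF_NORMAL | D3DFVF_DIFFUSE);
}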

I think it is similar to the x86 instruction set. To be x86-compatible, you need to implement the instruction set, but it doesn't matter HOW you implement it. There have been dozens of different x86 implementations over the years, but you wouldn't call them 'emulators of the 8086', let alone say that they 'emulate in software'.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 112 of 115, by maximus

Rank: Member
Scali wrote:

Yes, DirectX 9 drivers (not hardware) have to support the fixed function pipeline. This is because DirectX 9 has support for legacy hardware/drivers as well. I have an overview on my blog: https://scalibq.wordpress.com/2012/12/07/dire … -compatibility/
As you can see, DirectX 9 requires a DDI7-driver. Which means that any DirectX 7-class or higher hardware is supported by the API. Since programmable shaders weren't introduced until DirectX 8, this implies that DirectX 9 must support fixed function as well.
But it is up to the driver how this is implemented.

DirectX 10 has no fixed function pipeline in the API anymore, and therefore your hardware MUST support shaders (and they made the requirement SM4.0 as well, so you have to have full DX10-compliant hardware). Even if your hardware had a fixed-function pipeline on board, you can't use it via DirectX 10. But by the time DX10 was introduced, all vendors had long replaced their fixed-function pipelines with shaders anyway.
The funny thing as you can see is that DX11 requires only a DDI9 driver, and as such it can run on DX9-class hardware with SM2.0. This is mainly for Windows Phone support, where most SoCs don't support DX10+ level yet.

Thank you for clarifying. This is precisely the kind of technical knowledge that can benefit the Vogons community. Welcome aboard 😀

PCGames9505

Reply 113 of 115, by m1so

Rank: Member
King_Corduroy wrote:

I agree with this, but wanted to point out that Doom 3 on a Voodoo is probably a bad example, because people actually have gotten that to work on Voodoo2 (but maybe you were referencing that ironically). 😊 But I do completely get what you're saying, and agree with the general argument - there's absolutely been a "slowing" of performance growth, and aside from newer chips offering newer features or better power management, there's not a whole lot of incentive to upgrade as performance gains aren't generally linear or geometric as they once were.

I also will agree with not understanding why P4s get denigrated so much. I've never had an issue with NetBurst though - I've happily owned them since 2001, and don't have too many complaints. 😀

Well, I've actually seen Doom 3 on a Voodoo 2 😊 but as far as I know it was on a 2 GHz+ CPU. A 1998 CPU like a Pentium II or K6-2 would run it at perhaps 1 frame/hour. If this is how 2001-era GTA3 runs on a 500 MHz K6-2 https://www.youtube.com/watch?v=vhHWDl2M0K8 with a 1999 Rage 128 Pro, I don't want to see how Doom 3 would "run".

obobskivich wrote:

P4 doesn't really put out any more or less heat than anything modern - my C2Q is rated at 95W TDP, my i5 and C2D are both 65W (and I explicitly chose an "S" variant for the 65W TDP; the "normal" and "K" variants are more like 85-95W). By comparison, my NetBurst chips are 74W, 72W, and 92W. All of the various heatsinks I have for them are relatively similar in terms of capabilities as a result; the mounting is the only thing that differs significantly. The myth of the Pentium 4/NetBurst as "China Syndrome in a box" really needs to die imho. 😊 😵

Agreed. The PCs we've had, except for my totally quiet Lenovo B590 laptop, were all space heaters: fast, but loud and hot. My current i7-875K desktop is just as loud as our 3.2 GHz Northwood was.

obobskivich wrote:
smeezekitty wrote:

I have yet to find a game that is from the XP era, that doesn't work.

ORB, Empire Earth, Morrowind, Dark Forces II, Diggles, unpatched/unmodified Tiberian Sun all come to mind. 😊

Morrowind? Empire Earth? These two run happily on my 8-threaded Windows 7 system at native resolution (except I have to disable hardware T&L in Empire Earth).
San Andreas does as well. Just because the games don't run on your particular computer doesn't mean that's true for all Windows 7 machines. I never had to switch off cores for those games, but you can set core affinity to 1 or 2 cores in the Task Manager.
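For anyone who prefers to do that programmatically rather than from Task Manager, here is a hedged Win32 sketch (my illustration, not from the thread) that pins the current process to the first two logical CPUs.

#include <windows.h>
#include <cstdio>

int main()
{
    // Bitmask 0x3 = logical CPUs 0 and 1; equivalent to the Task Manager affinity setting.
    const DWORD_PTR two_cores = 0x3;
    if (!SetProcessAffinityMask(GetCurrentProcess(), two_cores))
        std::printf("SetProcessAffinityMask failed: %lu\n", GetLastError());
    // ... run or launch the game from here ...
    return 0;
}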

Reply 114 of 115, by idspispopd

Rank: Oldbie
obobskivich wrote:
philscomputerlab wrote:

Ok fair point. It seems I have mostly Asrock and Asus boards. So these seem to be fine. Especially AMD was so into core unlocking and many boards had options for that.

My newest i5 is on an ASRock board, and I looked through the manual trying to find mention of this feature, but it isn't there that I could see. 😊

I'm not at all surprised that my Intel and Dell boards don't offer the feature though. 🤣

Since you are mentioning Dell: I saw this option in some E-series Latitude models. (Turn HT on/off and set 1/2/4 CPU cores.)

Reply 115 of 115, by mr_bigmouth_502

Rank: Oldbie
m1so wrote:

Well, I've actually seen Doom 3 on a Voodoo 2 😊 but as far as I know it was on a 2 GHz+ CPU. A 1998 CPU like a Pentium II or K6-2 would run it at perhaps 1 frame/hour. If this is how 2001-era GTA3 runs on a 500 MHz K6-2 https://www.youtube.com/watch?v=vhHWDl2M0K8 with a 1999 Rage 128 Pro, I don't want to see how Doom 3 would "run".

That video of a K6-2 running GTA III is painful to watch. 🤣