VOGONS


Reply 40 of 92, by appiah4

User metadata
Rank l33t++
Scali wrote:

The first time Intel was actively pushed because of competition was when the K7 came around. Which was way after this period.

Before the 386, the x86 was not exactly 'established' as the de facto home computing chip - it was facing competition from a lot of other chip makers. Motorola's 68K, for example, was really in the game until the 386DX. Hell, not even RISC vs CISC was a done deal at the time. So essentially, Intel always faced competition and was actively pushed by it. Actually, the first period when Intel did not face any competition was following the Bulldozer fiasco. We all know what happened after that.

Retronautics: A digital gallery of my retro computers, hardware and projects.

Reply 41 of 92, by Scali

User metadata
Rank l33t
appiah4 wrote:

Before the 386, the x86 was not exactly 'established' as the de facto home computing chip

Nobody said anything about home computing.

appiah4 wrote:

- it was facing competition from a lot of other chip makers.

Not at all. You needed an x86 CPU to make an IBM PC-compatible DOS clone.

appiah4 wrote:

So essentially, Intel always faced competition and was actively pushed by it.

Actually, the fact that x86 was basically complete shit compared to all the competing architectures illustrates all the more that x86 operated in a vacuum. The vacuum created by IBM. Nobody ever got fired for buying IBM. So, businesses bought x86 machines by the truckload. There was no alternative. PC-compatible or bust.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 42 of 92, by appiah4

User metadata
Rank l33t++
Scali wrote:

Actually, the fact that x86 was basically complete shit compared to all the competing architectures illustrates all the more that x86 operated in a vacuum. The vacuum created by IBM. Nobody ever got fired for buying IBM. So, businesses bought x86 machines by the truckload. There was no alternative. PC-compatible or bust.

That is not my memory of the time at all, but I guess it could be a Europe vs US thing. The IBM/Intel symbiotic relationship and the establishment of x86 as an open OEM ecosystem (although not really intentional) are well documented and did go a long way towards eliminating competing architectures, but that doesn't mean Intel did not compete against them. Macs and Amigas were fairly feasible home PC or workstation alternatives at the time. Nobody got fired for buying an IBM, obviously, but not all jobs required one. Things like publishing, video editing, 3D rendering, MIDI, tracking etc. were nigh impossible on the x86 for a long time. Intel did not win automatically by being complacent. From the 8086 to the 80386 there was incredible development, and the way x86 eventually brute-forced itself above the competition is pretty interesting. Downplaying competition's role here is wrong IMO.

Retronautics: A digital gallery of my retro computers, hardware and projects.

Reply 44 of 92, by Scali

User metadata
Rank l33t
appiah4 wrote:

That is not my memory of the time at all, but I guess it could be a Europe vs US thing. The IBM/Intel symbiotic relationship and the establishment of x86 as an open OEM ecosystem (although not really intentional) are well documented and did go a long way towards eliminating competing architectures, but that doesn't mean Intel did not compete against them. Macs and Amigas were fairly feasible home PC or workstation alternatives at the time. Nobody got fired for buying an IBM, obviously, but not all jobs required one. Things like publishing, video editing, 3D rendering, MIDI, tracking etc. were nigh impossible on the x86 for a long time. Intel did not win automatically by being complacent. From the 8086 to the 80386 there was incredible development, and the way x86 eventually brute-forced itself above the competition is pretty interesting. Downplaying competition's role here is wrong IMO.

Well, how do you explain this then?
The 8086 and the 68000 were released at roughly the same time, and the 68000 was actually considered by IBM as their CPU for the PC at one point.
Now, the 68000 is quite an advanced design, with an orthogonal instruction set, almost RISC-like, with 16 32-bit registers and 32-bit instructions, even a simple supervisor/protected mode.
The 8086 was a complete joke of a CPU compared to that. It took Intel until the 386 to reach more or less feature parity with the 68000, and even then the x86 instruction set was more archaic, and you still only had 8 registers.
And it took them 7 years to get there!

If competition had been directly on the merits of the CPU alone, Intel simply wouldn't have stood a chance against the 68000. And technically it didn't: the 68000 basically went into everything. Into virtually all 16-bit personal computers, home computers, game consoles, arcade machines, workstations, you name it. Heck, even the web was invented on a NeXT workstation, which was powered by a 68k.
The only problem was: all these niche machines combined were a smaller market than the PC market, so Motorola lost anyway.

And that's just the CPU. The rest of the PC platform was complete shit as well: graphics, audio... complete joke. Compare a Commodore Amiga from the 80s against an equally-priced PC (if one even existed), and tell me 'competition' is why PCs won out over Amiga...
PCs were hopelessly behind in terms of development, on every front.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 45 of 92, by Standard Def Steve

User metadata
Rank Oldbie

-The original Athlon easily beat the Katmai PIII clock for clock. Unfortunately, it was held back a bit by unreliable chipsets. But the processor itself? Awesome.

-Athlon was NOT a clone of P6.

-AMD's Duron consistently left Celeron in the dust. In fact, Duron was dangerously close to similarly-clocked 100MHz FSB PIIIs.

-Thunderbird was typically faster than Coppermine clock-for-clock, although by the time the two processors reached 1GHz, it was more of a tie. Thunderbird @ 1GHz was faster when a program called for raw x87 (FPU) speed or memory throughput. Coppermine @ 1GHz was faster if the program could make use of SSE (see the sketch after this list). Of course, Thunderbird could clock much higher than 1GHz.

-The 1.4GHz PIII-S and 1.4GHz Thunderbird were both able to reach and sometimes exceed the performance of a 2GHz Willamette P4, unless you were using SSE2 or playing Quake III. Q3 was like, the ONE non-SSE2 program that Willamette ran really well.

-Athlon XP was THE processor to have when it was released in 2001. The AXP-1900+ (1.6GHz) ran circles around the 2GHz P4, and it had stable chipsets to boot! The P4 only really pulled ahead of the AXP when it received an 800MHz FSB.

-Athlon 64 was THE processor to have when it was released in 2003. It was a monster gaming CPU thanks to a very fast FPU, now with SSE2/3 support. For a while, SLI was only available on AMD-based systems! The only apps that ran slightly faster on a P4 were the ones that could make GOOD use of hyper-threading.

-K8-based Opterons vs. NetBurst-based Xeons? Not even close. Opteron was vastly superior. The integrated memory controller helped big time with server workloads.

-If you thought Athlon 64 was giving Intel a rough time, well, the A64-X2 was even more of a bully! This time, Intel didn't have hyper-threading to give them the occasional win. Athlon 64 X2 was better than Pentium D in every way, period.

-Unfortunately, it ended there. Core 2 was, well, amazing. And it only got worse for AMD as Core 2 gained additional instruction sets, integrated memory controllers, and beefed-up SIMD units in the form of Core i series processors. Today, Ryzen definitely isn't the fastest processor available, but it does perform nearly as well as Broadwell-E with apps that don't make extensive use of AVX2, and for a much lower price.
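To make the x87-vs-SSE point in the Thunderbird/Coppermine bullet above concrete, here is a minimal sketch (my own illustration, not from any benchmark in this thread): the exact same loop compiles to x87 stack code or SSE code depending purely on the build flags, which is why which 1GHz chip "won" depended on how the program was built, not just on the workload.

/* SAXPY kernel: y = a*x + y. With gcc targeting 32-bit x86:
 *   gcc -O2 -m32 -mfpmath=387 saxpy.c        -> x87 stack code (Thunderbird's strong suit)
 *   gcc -O2 -m32 -msse -mfpmath=sse saxpy.c  -> SSE code (Coppermine's strong suit)
 */
#include <stdio.h>

#define N 1000000
static float x[N], y[N];

int main(void)
{
    const float a = 2.5f;
    for (int i = 0; i < N; i++)
        x[i] = (float)i;          /* fill input */
    for (int i = 0; i < N; i++)
        y[i] = a * x[i] + y[i];   /* the FP-heavy loop */
    printf("%f\n", y[N - 1]);     /* keep the result live */
    return 0;
}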

94 MHz NEC VR4300 | SGI Reality CoPro | 8MB RDRAM | Each game gets its own SSD - nooice!

Reply 46 of 92, by firage

User metadata
Rank Oldbie

Basically, the Athlons never lost overall to their clock-for-clock rival Pentium IIIs, while their cost was significantly lower - the equally rated parts were practically in different market segments. Athlons weren't trailing behind in clocks either; their top end was ahead of Intel's.

Aggregated average retail prices for the best widely available parts here in Finland (expensive), collected in May 2000, in USD:
Slot 1 Pentium III: 800MHz $838, 750MHz $690, 700MHz $526, 650MHz $397, 600MHz $295
Slot A Athlon: 900MHz $978, 850MHz $794, 800MHz $557, 750MHz $391, 700MHz $304
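As a quick back-of-the-envelope on those numbers (purely illustrative arithmetic on the prices quoted above), dollars per MHz shows how the equally clocked parts sat in different price brackets:

/* Dollars per MHz for the May 2000 price list above. */
#include <stdio.h>

struct part { const char *name; int mhz; int usd; };

int main(void)
{
    const struct part parts[] = {
        { "PIII 800MHz", 800, 838 }, { "Athlon 800MHz", 800, 557 },
        { "PIII 750MHz", 750, 690 }, { "Athlon 750MHz", 750, 391 },
        { "PIII 700MHz", 700, 526 }, { "Athlon 700MHz", 700, 304 },
    };
    for (unsigned i = 0; i < sizeof parts / sizeof parts[0]; i++)
        printf("%-14s $%3d  $%.2f/MHz\n", parts[i].name, parts[i].usd,
               (double)parts[i].usd / parts[i].mhz);
    return 0;
}

At 800MHz, for instance, that works out to roughly $1.05/MHz for the PIII against $0.70/MHz for the Athlon.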

Clock-for-clock comparisons ended with the Pentium 4. With Intel, you continued to pay more for products that kept losing to the AMD competition due to inefficient designs.

Core 2 was more aggressively priced than before, and the performance and efficiency were fantastic. AMD barely mounted any competition.

My big-red-switch 486

Reply 47 of 92, by appiah4

User metadata
Rank l33t++
Scali wrote:
appiah4 wrote:

That is not my memory of the time at all, but I guess it could be a Europe vs US thing. The IBM/Intel symbiotic relationship and the establishment of x86 as an open OEM ecosystem (although not really intentional) are well documented and did go a long way towards eliminating competing architectures, but that doesn't mean Intel did not compete against them. Macs and Amigas were fairly feasible home PC or workstation alternatives at the time. Nobody got fired for buying an IBM, obviously, but not all jobs required one. Things like publishing, video editing, 3D rendering, MIDI, tracking etc. were nigh impossible on the x86 for a long time. Intel did not win automatically by being complacent. From the 8086 to the 80386 there was incredible development, and the way x86 eventually brute-forced itself above the competition is pretty interesting. Downplaying competition's role here is wrong IMO.

Well, how do you explain this then?
The 8086 and the 68000 were released at roughly the same time, and the 68000 was actually considered by IBM as their CPU for the PC at one point.
Now, the 68000 is quite an advanced design, with an orthogonal instruction set, almost RISC-like, with 16 32-bit registers and 32-bit instructions, even a simple supervisor/protected mode.
The 8086 was a complete joke of a CPU compared to that. It took Intel until the 386 to reach more or less feature parity with the 68000, and even then the x86 instruction set was more archaic, and you still only had 8 registers.
And it took them 7 years to get there!

If competition had been directly on the merits of the CPU alone, Intel simply wouldn't have stood a chance against the 68000. And technically it didn't: the 68000 basically went into everything. Into virtually all 16-bit personal computers, home computers, game consoles, arcade machines, workstations, you name it. Heck, even the web was invented on a NeXT workstation, which was powered by a 68k.
The only problem was: all these niche machines combined were a smaller market than the PC market, so Motorola lost anyway.

And that's just the CPU. The rest of the PC platform was complete shit as well: graphics, audio... complete joke. Compare a Commodore Amiga from the 80s against an equally-priced PC (if one even existed), and tell me 'competition' is why PCs won out over Amiga...
PCs were hopelessly behind in terms of development, on every front.

I may not be getting my point across and I have a feeling it's a language barrier (not a native speaker here), but I did not mean to imply that x86 won the PC war because it was better. I only implied that x86 developed into the state where it was certainly the most powerful chip in the market for PCs (386DX) because there was competition in the form of other architectures. It was not factually better, nor did it win entirely on its merits as a great CPU. That is another story entirely. But that it got to the 386DX in 7 years is because the 68K (and later the 020, 040 etc.) as well as AMD/Cyrix pushed them there. Left to its own devices, Intel could have merrily been selling 8086 CPUs just clocked higher, like they did with the Core line. We are probably not seeing eye to eye on the issue and I don't mean to argue further and derail the topic, but to reiterate the point: Intel did not win the PC war because it was better in direct competition, but it did eventually get CPUs out the gates that matched and bettered competitor offerings *because* there was competition that set a target. We don't necessarily know what would have happened if they were left to their own devices in a monopoly market, but AMD banking on multicore processing with Bulldozer and the gains failing to materialize certainly did result in increasingly ridiculous diminishing returns on their proposed upgrade path.

Retronautics: A digital gallery of my retro computers, hardware and projects.

Reply 48 of 92, by Scali

User metadata
Rank l33t
appiah4 wrote:

I may not be getting my point across and I have a feeling it's a language barrier (not a native speaker here), but I did not mean to imply that x86 won the PC war because it was better. I only implied that x86 developed into the state where it was certainly the most powerful chip in the market for PCs (386DX) because there was competition in the form of other architectures.

My point is that 7 years is an eternity in the world of technology.
If there really had been any competition, x86 would never have survived that long. x86 only survived because it basically didn't matter what other CPUs were doing.

appiah4 wrote:

as well as AMD/Cyrix pushed them there.

Nonsense. Look at the timeline.
AMD and Cyrix didn't come into play until years after the 386. When the 386 arrived, AMD was still merely a second-source supplier of 8088 and 286 CPUs, and wasn't any competition at all. Cyrix wasn't even a second source, I believe: they didn't actually have an x86 license of their own, and required a third party with an x86 license to manufacture their chips, which was IBM.

appiah4 wrote:

Left to its own devices, Intel could have merrily been selling 8086 CPUs just clocked higher, like they did with the Core line.

That's nonsense. The market would saturate at some point, and only upgrade paths would allow for any sales.
This is also what happened in the x86 world. Intel actually DID sell 8088 CPUs until the early 90s. However, software like Windows and the advent of networking and servers required faster CPUs and more memory, hence the 386. But although the 386 was introduced in 1985, it didn't become mainstream until the early 90s, and even then mostly in the cost-reduced 386SX form.

appiah4 wrote:

We don't necessarily know what would have happened if they were left to their own devices in a monopoly market, but AMD banking on multicore processing with Bulldozer and the gains failing to materialize certainly did result in increasingly ridiculous diminishing returns on their proposed upgrade path.

Except that isn't true. Intel still developed many new CPUs and technologies. The only thing is that they didn't become mainstream. But the huge mistake that you and most people make is that you don't bother to look at product lines other than the mainstream Pentium and Core lines. There are a lot of Xeon CPUs (not least the Xeon Phi) where Intel continued to push technology. But as with the 386 before, it comes at a premium, and is mainly for the server/workstation markets. There is no demand from the mainstream market for anything faster than a Core i7.
I'm still not convinced that AMD is actually changing this with Ryzen.
Worst case, Intel just has to rebadge some lower-end Xeons as Core to make their 6/8-core CPUs mainstream. They're already there, and have been for years, and they can easily outperform Ryzen. They just haven't been marketed to mainstream buyers yet, unlike Ryzen.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 49 of 92, by appiah4

User metadata
Rank l33t++

I disagree with the majority of this, but I really don't enjoy discussing this with you when you have this attitude, so it is what it is.

Retronautics: A digital gallery of my retro computers, hardware and projects.

Reply 50 of 92, by amadeus777999

User metadata
Rank Oldbie

I guess that the bad decisions on both sides (architecture-wise) were mostly based on marketing-related pressure. I can't remember 100%, but the "deep" pipeline/high-clock approach of the P4 was a decision to totally "demolish" AMD on the MHz side and didn't originate from the development team.

Reply 51 of 92, by Scali

User metadata
Rank l33t
amadeus777999 wrote:

I can't remember 100%, but the "deep" pipeline/high-clock approach of the P4 was a decision to totally "demolish" AMD on the MHz side and didn't originate from the development team.

That is a common myth on the web, yes. I don't think Intel is a company that would be so unprofessional as to work this way.
I think the development team genuinely thought they could pull it off.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 52 of 92, by dexvx

User metadata
Rank Oldbie
Scali wrote:

Well, how do you explain this then?
The 8086 and the 68000 were released at roughly the same time, and the 68000 was actually considered by IBM as their CPU for the PC at one point.
Now, the 68000 is quite an advanced design, with an orthogonal instruction set, almost RISC-like, with 16 32-bit registers and 32-bit instructions, even a simple supervisor/protected mode.
The 8086 was a complete joke of a CPU compared to that. It took Intel until the 386 to reach more or less feature parity with the 68000, and even then the x86 instruction set was more archaic, and you still only had 8 registers.
And it took them 7 years to get there!

Because features aren't everything? There's time to market, marketing, business execution, and all the other 'non-engineering' work.

If we were to base it solely on engineering, 3DLabs would be dominating the current GPU market. It took several years for Nvidia/ATI to develop a chip that matched the GLINT in terms of features.

Reply 53 of 92, by Scali

User metadata
Rank l33t
dexvx wrote:

Because features aren't everything? There's time to market, marketing, business execution, and all the other 'non-engineering' work.

That's not the point. Any way you slice it, x86 loses.
Which explains why the IBM PC was the only major platform to use the x86. Literally nothing else ever used the chips. Competing computers used 68000, 6502, Z80, or various other chips, but NOBODY ever chose the x86 outside of IBM and their clones.
The x86 had basically nothing going for it. The reason why IBM chose it is a bit of a fluke: they had built microcomputers on Intel's previous 8-bit platform before, and could re-use parts if they used the 8088, which was compatible with the chipsets for the 8085 (which they were using for the Datamaster they were developing: https://en.wikipedia.org/wiki/IBM_System/23). These were quite poor chipsets to boot. Result: the IBM PC was a massively overpriced, massively underpowered platform compared even to many 8-bit platforms, and no match for the 68000 platforms of the day (except in price).

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 54 of 92, by dexvx

User metadata
Rank Oldbie
Scali wrote:

These were quite poor chipsets to boot. Result: the IBM PC was a massively overpriced, massively underpowered platform compared even to many 8-bit platforms, and no match for the 68000 platforms of the day (except in price).

I think you just contradicted yourself here. The IBM PC was overpriced, yet they undercut the competition? From what I read, Intel gave IBM a good price on the 8088 along with guaranteed volume, something that Motorola could not do. IBM's margins would be higher on every system sold, so they went with that.

How many average consumers (outside of enthusiasts) actually care what is inside their product? They just want a complete product that works. Apple's iPhone and iPad (and Samsung's Galaxy S) sure as heck don't use the best individual components, but that doesn't stop them from being sales juggernauts. And from another consumer standpoint, tons of people in budget/mid-range BMWs actually think they have a sporty car.

Reply 55 of 92, by Scali

User metadata
Rank l33t
dexvx wrote:

I think you just contradicted yourself here. The IBM PC was overpriced, yet they undercut the competition?

I didn't say they undercut anyone.
I said they matched the price levels of 68000-based machines. Which was a bit of a sarcastic joke, since 68000 machines were 16-bit, whereas 8088 machines were 8-bit. So the 8088 should have been cheaper, priced more like the 6502 and Z80 machines that were more of a direct match in performance. But instead they were way more expensive. Priced like a 68000 machine, but nowhere near the performance or capabilities. Back when I got my first 8088 machine somewhere in 1988 (Hercules monochrome, two floppy drives, no HDD, 640k memory), you could buy two Amiga 500 machines for the same price. The Amiga was a much more advanced machine, obviously.

dexvx wrote:

From what I read, Intel gave IBM a good price on the 8088 along with guaranteed volume, something that Motorola could not do. IBM's margins would be higher on every system sold, so they went with that.

From what I read, the price difference on the CPUs themselves would have been negligible. The difference was that the 8088 had cheap 8-bit chipsets and motherboards, whereas at the time 68000 chipsets weren't available off the shelf (Apple, Atari, Commodore etc. developed their own chipsets).
IBM needed the PC asap, so they went for the fastest and cheapest option: off-the-shelf 8-bit components.

dexvx wrote:

How many average consumers (outside of enthusiasts) actually care what is inside their product? They just want a complete product that works.

Back then, the CPU was very VERY important. OSes and software were tied directly to the CPU and the rest of the hardware.
In fact, IBM already took a gamble by going with the 8088 instead of a Z80, because the Z80 was a popular option with CP/M, which was the 'standard' microcomputer OS of the day.
Which is where DOS came in: IBM originally wanted an 8086-port of CP/M, but the negotiations with Digital didn't go very well.
So IBM asked Microsoft for an alternative. Microsoft came up with DOS, which is basically a CP/M clone.
Digital also delivered the 8086-port of CP/M, and IBM offered it as an alternative. It came at a big price premium though, so DOS quickly became the standard OS for the IBM PC.
At which point the fate of the 8086 was cemented as well: clone builders needed to use 8088 CPUs too, and had to clone the hardware and BIOS accurately in order to run all the software for the IBM PC.
So yes, consumers cared very much about what was inside their product: they wanted either a real IBM or a 100% compatible clone. Which excluded all other CPUs by default.
That's why Intel could get away with selling sub-par CPUs and chipsets for so long.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 56 of 92, by gdjacobs

User metadata
Rank l33t++

You should probably be more clear here. By Digital you surely mean Digital Research and/or Gary Kildall and not Digital Equipment.

All hail the Great Capacitor Brand Finder

Reply 57 of 92, by kanecvr

User metadata
Rank Oldbie
Scali wrote:
Bobolaf wrote:

Very true, both the K7 and K8 were really impressive when they came out. I can only imagine the panic at Intel when they realised AMD had actually bettered them.

I don't think K7 was when Intel realized their mistake. K7 was only marginally better than the PIII (in fact, it is mostly a clone of that architecture, with a few tweaks and improvements).

The P3 and the Athlon have nothing in common architecturally, and performance-wise the Athlon is faster. The FPU is up to 2x faster on the Athlon when compared to an identically clocked Tualatin.

Also, the P6 architecture scales very well - the Centrino/Core series is proof of that. When they finally figured out NetBurst was not a good direction, Intel redesigned the P3 into the Centrino platform and later the Core and Core 2 CPUs.

Bobolaf wrote:

AMD often priced lower-end chips quite aggressively, so you could get performance for little cash, but in general it was normally playing catch-up in absolute performance. The whole PR rating system was an embarrassment though, as AMD tended to be very optimistic with those numbers. The early K7 range was probably one of the most competitive high-end CPUs AMD had done, and genuinely a very powerful chip. AMD did drag out the K7 family for far too long. The P4 2.4C was substantially better at many pro apps and could do dramatically more SETI work units a day than even a top-of-the-line Athlon XP 3200+. The early 64-bit chips again saw AMD back at the high end of things with the very impressive K8. The later FX chips with shared FPUs always struggled. These new Zen chips are quite impressive too; fingers crossed AMD will launch some higher-clocked models soon.

That is because the software was developed specifically (heavily optimized) for NetBurst chips. The whole thing was more of a publicity stunt than anything else. Intel also aggressively pushed SSE2 and SSE3 because the chips relied on extended instructions, L2 cache size and high clocks to perform well. When it comes to raw power, the K7 destroys NetBurst clock for clock.
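As an aside on what "heavily optimized for NetBurst chips" looked like in practice: software of that era typically probed CPUID at startup and only dispatched to its SSE2/SSE3 code paths on CPUs that reported those features. A minimal sketch, assuming gcc/clang on x86 (the <cpuid.h> helper is compiler-specific):

/* Probe SSE2/SSE3 support the way a runtime dispatcher would. */
#include <stdio.h>
#include <cpuid.h>   /* gcc/clang: __get_cpuid, bit_SSE2, bit_SSE3 */

int main(void)
{
    unsigned eax, ebx, ecx, edx;

    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
        puts("CPUID leaf 1 not supported");
        return 1;
    }
    printf("SSE2: %s\n", (edx & bit_SSE2) ? "yes" : "no");
    printf("SSE3: %s\n", (ecx & bit_SSE3) ? "yes" : "no");
    /* a real program would pick its optimized kernel here */
    return 0;
}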

Tetrium wrote:
Scali wrote:

the architecture was a failure, and they would have been better off sticking to their previous architecture.

The same goes for NetBurst, and in the end Intel actually did exactly that.
Btw, Athlon wasn't some cheap clone of Coppermine/P6 like you make it out to be.
Thunderbird was perhaps marginally better, but it ran WAY hotter! But at least it could be scaled, which is something Coppermine had lots of trouble with when trying to surpass 1GHz (and Athlon could make more effective use of DDR compared to Pentium 3).

The K7 ran hot when compared to a P3 - but that's because of two factors:
- the initial Slot A release, which made it hard to install an efficient cooler
- the use of crappy little Socket 370-style coolers that were adequate for a P3 but not for a Thunderbird.

Reply 58 of 92, by r.cade

User metadata
Rank Member

I think someone answered already, but AMD was beating Intel in performance from the Athlon's introduction until the Core 2 CPUs finally came out.

It would be nice for competition like that to happen again - it was an incredible 5-6 years. I've heard about Ryzen - it's more cores, but I thought single-core performance was still not as fast as Intel's. That's disappointing.

Reply 59 of 92, by appiah4

User metadata
Rank l33t++
r.cade wrote:

I think someone answered already, but AMD was beating Intel in performance from the Athlon's introduction until the Core 2 CPUs finally came out.

It would be nice for competition like that to happen again - it was an incredible 5-6 years. I've heard about Ryzen - it's more cores, but I thought single-core performance was still not as fast as Intel's. That's disappointing.

IPC is up there; they just don't have the same clock speeds. They will get better clock speeds through manufacturing process improvements before Intel can whip a modern architecture out of their asses, so Ryzen/Threadripper/Epyc will be unmatched by anything Intel can offer next year.

Retronautics: A digital gallery of my retro computers, hardware and projects.