VOGONS


First post, by Atom Ant

User metadata
Rank Newbie

In 1997 we saw huge performance improvements in the CPU segment: first the Pentium MMX CPUs, then soon the new Pentium II. These new Pentium II processors were not just crazy fast, they also looked totally different from the previous ones. They were like 'cassettes', which also carried the cooling element of the CPU. These cartridges could be easily plugged into the new slot motherboards, eliminating the chance of bending the CPU's pins. So they were awesome, and everybody believed this was the future!

Then the next year some early Pentium III processors also came in this format, but then Intel suddenly killed off the slot motherboards and processors. The faster, newer Pentium III variants went back to the good old socket platform, and since then we have had only socket processors...

I actually do not know what happened. Why did Intel drop the slot form factor? Somehow this part is missing from my memories. Anybody?

My high-end gaming machine of '96:
Intel PR440FX - Pentium Pro 200MHz 512K, Matrox Millennium I 4MB, Creative 3D Blaster Voodoo II 12MB SLI, 128MB EDO RAM, Creative Sound Blaster AWE64 Gold, 4x Creative CD reader, Windows 95...

Reply 1 of 27, by Srandista

User metadata
Rank Oldbie

Because Intel was able to integrate the L2 cache directly into the die again, rather than putting it on a PCB next to the die.

Socket 775 - ASRock 4CoreDual-VSTA, Pentium E6500K, 4GB RAM, Radeon 9800XT, ESS Solo-1, Win 98/XP
Socket A - Chaintech CT-7AIA, AMD Athlon XP 2400+, 1GB RAM, Radeon 9600XT, ESS ES1869F, Win 98

Reply 2 of 27, by stamasd

User metadata
Rank l33t

Correct. The only reason Intel went with the slot form factor was that they were unable to reliably embed cache in their initial PII cores, Klamath and Deschutes. So they added the cache as separate chips on a PCB next to the core, and the whole package became a 'slot' cartridge. Once they solved the cache problem with a later core, Mendocino, the slot form factor became unnecessary and they soon got rid of it.
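
Incidentally, the gap between cartridge cache running at half the core clock (Klamath/Deschutes) and full-speed on-die cache (Mendocino onwards) is easy to make visible with a pointer-chasing loop. A minimal sketch, assuming a C compiler on a POSIX system; the node count is just an illustrative value, sized so the working set lands in an L2 of that era:

/* Pointer-chase latency probe: every load depends on the previous
 * one, so the measured time per iteration approximates raw load
 * latency for the chosen working-set size. Off-die, half-clock L2
 * shows up as noticeably more ns per load than full-speed on-die L2. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

enum { N = 1 << 15, ITERS = 10000000 };  /* 32K nodes, 256 KB on 64-bit */

static unsigned long long rng_state = 12345;
static size_t rng(void)  /* small deterministic LCG, good enough here */
{
    rng_state = rng_state * 6364136223846793005ULL + 1442695040888963407ULL;
    return (size_t)(rng_state >> 33);
}

int main(void)
{
    size_t *next = malloc(N * sizeof *next);
    size_t *perm = malloc(N * sizeof *perm);
    size_t i, j, t, cur = 0;
    struct timespec t0, t1;

    if (!next || !perm)
        return 1;

    /* Fisher-Yates shuffle, then chain the permutation into a single
     * cycle so hardware prefetching can't predict the next address. */
    for (i = 0; i < N; i++)
        perm[i] = i;
    for (i = N - 1; i > 0; i--) {
        j = rng() % (i + 1);
        t = perm[i]; perm[i] = perm[j]; perm[j] = t;
    }
    for (i = 0; i < N - 1; i++)
        next[perm[i]] = perm[i + 1];
    next[perm[N - 1]] = perm[0];

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (i = 0; i < ITERS; i++)
        cur = next[cur];              /* dependent load chain */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9
              + (double)(t1.tv_nsec - t0.tv_nsec);
    printf("~%.1f ns per dependent load (cur=%zu)\n", ns / ITERS, cur);
    free(next);
    free(perm);
    return 0;
}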

I/O, I/O,
It's off to disk I go,
With a bit and a byte
And a read and a write,
I/O, I/O

Reply 4 of 27, by stamasd

User metadata
Rank l33t
Errius wrote:

Why did AMD produce their own incompatible slot instead of just building CPU cartridges that fit the Intel slot?

Because Slot 1 was patented by Intel. AMD were legally not allowed to make CPUs that would be pin-compatible with the Intel slot. All Intel sockets since then have been patented as well, so AMD had to go with their own slots and sockets.

I/O, I/O,
It's off to disk I go,
With a bit and a byte
And a read and a write,
I/O, I/O

Reply 5 of 27, by Koltoroc

User metadata
Rank Member
Errius wrote:

Why did AMD produce their own incompatible slot instead of just building CPU cartridges that fit the Intel slot?

Fun fact: the physical slots are identical, just rotated 180 degrees. They are completely electrically incompatible, though.

Reply 6 of 27, by The Serpent Rider

User metadata
Rank l33t++

Technically they could have done it in socket form; after all, the Pentium Pro had two dies under the hood (CPU and L2 cache). But such a solution most likely is not cost-effective, and you need much more space on the motherboard PCB.

I must be some kind of standard: the anonymous gangbanger of the 21st century.

Reply 7 of 27, by kjliew

User metadata
Rank Oldbie
Errius wrote:

Why did AMD produce their own incompatible slot instead of just building CPU cartridges that fit the Intel slot?

Intel and AMD had been tangled in lawsuits ever since AMD produced a direct replacement for the 386 that ran faster than Intel's 386. Intel quickly moved on to the 486 and blocked/delayed AMD's venture into the 486 by withholding the rights to the FPU microcode. At the same time, AMD was working on clean-room designs for its future x86 chips, i.e. the K6 family. However, they still leveraged Intel's infrastructure: Socket 3, Socket 5 and Socket 7. I believe AMD did not have rights to the MMX/SSE instructions.

Finally, both companies settled all the prior lawsuits and agreed to cross-license all future x86 instructions. AMD gained rights to MMX/SSE. While Intel could technically have used AMD's 3DNow! instructions as well, they didn't see the need to support them. This only benefited Intel much later, when AMD designed the x86-64 extension for 64-bit computing. Intel's vision for 64-bit computing was Itanium, and we all know by now how that vision flopped.
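
As an aside, whichever of these cross-licensed extensions a given chip actually implements is advertised through CPUID feature flags, so software can simply probe for them. A minimal sketch, assuming GCC or Clang on x86 (cpuid.h is their helper header, not standard C):

#include <stdio.h>
#include <cpuid.h>

int main(void)
{
    unsigned eax, ebx, ecx, edx;

    /* CPUID leaf 1, EDX: bit 23 = MMX, bit 25 = SSE, bit 26 = SSE2. */
    if (__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
        printf("MMX : %s\n", (edx & (1u << 23)) ? "yes" : "no");
        printf("SSE : %s\n", (edx & (1u << 25)) ? "yes" : "no");
        printf("SSE2: %s\n", (edx & (1u << 26)) ? "yes" : "no");
    }

    /* Extended leaf 0x80000001, EDX: bit 31 = 3DNow!,
     * bit 30 = extended 3DNow! (AMD chips only, in practice). */
    if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx)) {
        printf("3DNow!    : %s\n", (edx & (1u << 31)) ? "yes" : "no");
        printf("3DNow! ext: %s\n", (edx & (1u << 30)) ? "yes" : "no");
    }
    return 0;
}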

The settlement also required AMD to develop their own infrastructure for their CPUs to compete. AMD agreed to stop producing CPUs as direct replacements for Intel counterparts; otherwise the cross-licensing would have granted both companies access to each other's patents, too. In fact, this was a good bargain for AMD in the beginning, as the P6 (and later P4) FSB was hard to design without process tuning. The Athlon FSB and the later HyperTransport link proved more advanced than what Intel had to show.

My personal assessment of the settlement is that Intel just wanted higher entry barriers for AMD. AMD could no longer simply reverse-engineer Intel's counterpart and find ways to enhance it into a product. AMD successfully outmaneuvered those barriers. However, Intel gained the freedom to design anything outside AMD's specifications for when the ship of Itanium would inevitably sink into oblivion.

Reply 8 of 27, by konc

User metadata
Rank l33t
stamasd wrote:

Correct. The only reason Intel went with the slot form factor was that they were unable to reliably embed cache in their initial PII cores, Klamath and Deschutes.

Also correct 🤣 Just a small comment which doesn't change the essence of what you wrote: they weren't unable, they just wanted to cut down on the cost of the (not-so-few) units that failed because of the cache. If the cache wasn't on a board, they'd have had to throw away the whole chip, whereas with this approach they could either replace the cache chips or sell the unit as a cacheless Celeron.

Last edited by konc on 2018-08-09, 20:10. Edited 1 time in total.

Reply 9 of 27, by stamasd

User metadata
Rank l33t
The Serpent Rider wrote:

Technically they could have done it in socket form; after all, the Pentium Pro had two dies under the hood (CPU and L2 cache). But such a solution most likely is not cost-effective, and you need much more space on the motherboard PCB.

The initial slot PIIs had 3 chips on the PCB: the CPU core and 2 cache chips (one on each side of the CPU). The total die area was too large to fit comfortably on a socket-sized package. Plus there was the added benefit of having everyone upgrade to brand-new motherboards, while forbidding AMD to make electrically compatible chips. The stakes were just too high.

[attachment: P2_333.jpg]

I/O, I/O,
It's off to disk I go,
With a bit and a byte
And a read and a write,
I/O, I/O

Reply 10 of 27, by hyoenmadan

User metadata
Rank Member
stamasd wrote:

The initial slot PIIs had 3 chips on the PCB: the CPU core and 2 cache chips (one on each side of the CPU). The total die area was too large to fit comfortably on a socket-sized package. Plus there was the added benefit of having everyone upgrade to brand-new motherboards, while forbidding AMD to make electrically compatible chips. The stakes were just too high.

[attachment: P2_333.jpg]

Some PIIs and later PIIIs also had a LAPIC chip, which could be used in conjunction with the optional IOAPIC on the motherboard to build compliant MPS SMP systems. The last Slot 1 PIIIs had the LAPIC logic bundled into the CPU, with only the cache chips outside, bringing the count back down to 3 chips, like the first PIIs.

Reply 11 of 27, by Scali

User metadata
Rank l33t
stamasd wrote:

Because Slot 1 was patented by Intel. AMD were legally not allowed to make CPUs that would be pin-compatible with the Intel slot. All Intel sockets since then have been patented as well, so AMD had to go with their own slots and sockets.

To be more exact, the FSB protocol was proprietary, and AMD could not license it.
This was a response to AMD making CPUs that were drop-in replacements for Intel chips, so that AMD could piggyback on the chipsets and motherboards built for Intel CPUs.
Initially this led to the 'Super Socket 7' platform, where AMD continued to use the outdated Socket 7 platform and extended its capabilities with the help of some allied chipset manufacturers.
For the Athlon they needed something better, though, so AMD bought technology from Digital and used it to create their own Slot A and FSB protocol. Both the FSB protocol and various parts of the Athlon execution core are based on technology from Digital's Alpha CPU line.
The result was a platform that outperformed Intel's Pentium II and III. Probably not what Intel had in mind when they blocked AMD from using their platform 😀

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 12 of 27, by 640K!enough

User metadata
Rank Oldbie

For consumer and small business systems, this was mostly unique to Intel and, later, AMD. The AIM alliance at the time had a similar cache arrangement, but the smaller PCB module still fit into a similar ZIF socket, with the part of the board holding the cache ICs sitting off the edge of the socket. Photos of many machines from the G3 era will show this. I thought that was a more elegant solution: smaller board, no metal plate, retention module or larger heatsink and, one would assume, lower production costs.

Reply 13 of 27, by GL1zdA

User metadata
Rank Oldbie
hyoenmadan wrote:
stamasd wrote:

The initial slot PIIs had 3 chips on the PCB: the CPU core and 2 cache chips (one on each side of the CPU). The total die area was too large to fit comfortably on a socket-sized package. Plus there was the added benefit of having everyone upgrade to brand-new motherboards, while forbidding AMD to make electrically compatible chips. The stakes were just too high.

[attachment: P2_333.jpg]

Some PIIs and later PIIIs also had a LAPIC chip, which could be used in conjunction with the optional IOAPIC on the motherboard to build compliant MPS SMP systems. The last Slot 1 PIIIs had the LAPIC logic bundled into the CPU, with only the cache chips outside, bringing the count back down to 3 chips, like the first PIIs.

Wasn't the LAPIC integrated in the CPU since the Pentium P54C (Socket 5)?

getquake.gif | InfoWorld/PC Magazine Indices

Reply 14 of 27, by nforce4max

User metadata
Rank l33t

To put it simply, it was no longer needed, and production cost was another factor: moving back to ZIF sockets was cheaper per unit. I can only imagine what a pain in the ass NetBurst in slot form would have been, thermals-wise.

On a far away planet reading your posts in the year 10,191.

Reply 15 of 27, by Scali

User metadata
Rank l33t
640K!enough wrote:

For consumer and small business systems, this was mostly unique to Intel and, later, AMD. The AIM alliance at the time had a similar cache arrangement, but the smaller PCB module still fit into a similar ZIF socket, with the part of the board holding the cache ICs sitting off the edge of the socket. Photos of many machines from the G3 era will show this. I thought that was a more elegant solution: smaller board, no metal plate, retention module or larger heatsink and, one would assume, lower production costs.

Mind you, though, in the markets that AIM targeted it was not common to custom-build or upgrade machines, let alone for consumers to do it themselves.
The slot arrangement made it very easy for a consumer to install their own CPU.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 16 of 27, by 640K!enough

User metadata
Rank Oldbie

At the time, the average consumer wouldn't have known a USB port if it got up and bit their finger, much less have had the ability to successfully replace a Slot 1 processor.

I will concede that building a custom Mac OS computer wasn't possible, let alone common, but upgrading was common enough since the 68K days. Many Mac users found it easy enough to slip an upgrade module into a PDS slot and install the software, or could have the vendor install it for them. It was common enough that there were multiple companies that specialised in such upgrades; look up Newer Technology and Sonnet Technologies, for two examples. As Apple tended more toward the disposable PC concept, those companies mostly adapted their product lines or went out of business.

In the Amiga revival world, there were a few PowerPC-based designs. There, too, processors were sold for installation and upgrades, always based on the ZIF socket, as far as I'm aware. Now that I think of it, such upgrade modules were fairly common in the 68K Amiga world, too.

Reply 17 of 27, by hyoenmadan

User metadata
Rank Member
GL1zdA wrote:

Wasn't the LAPIC integrated in the CPU since the Pentium P54C (Socket 5)?

[attachment: L_00004275.jpg]

Right, the Intel S82459AD is identified in some places as TagRAM and in others as a separate L2 cache controller. But it isn't a LAPIC, for sure. Sorry for the misinterpretation.
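
For anyone who wants to check: CPUID reports an on-die local APIC directly via leaf 1, EDX bit 9, which has been set since the P54C whenever the APIC hasn't been disabled. A minimal sketch, again assuming GCC or Clang on x86:

#include <stdio.h>
#include <cpuid.h>

int main(void)
{
    unsigned eax, ebx, ecx, edx;

    /* CPUID leaf 1: EDX bit 9 = on-chip local APIC. */
    if (__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        printf("On-chip local APIC: %s\n",
               (edx & (1u << 9)) ? "present" : "absent");
    return 0;
}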

Reply 18 of 27, by RaverX

User metadata
Rank Member

I think it was all about the money. A Slot 1 CPU was basically a mini motherboard: a PCB with an edge connector, with the CPU and the cache (for non-Coppermine CPUs) soldered onto it. On top of that there was the plastic housing and very often a heatsink, plus (not always, but a lot of the time) a fan. Those CPUs were more expensive to make than 'normal' socket CPUs. But it was worth it, unfortunately only for the consumer, not for Intel. It was durable - no pins on the CPU or on the motherboard.

Also, the owner of a Slot 1 platform could make several upgrades. Let's say you got a good 440BX motherboard in 1998 and paired it with a PII 266 (a good CPU, and not very expensive). In 1999 you could easily upgrade to a PIII 450 or PIII 500, very good CPUs that were also not very expensive. In 2000 or 2001 you could upgrade to a PIII 1000 (either a Slot 1 CPU or even a Socket 370 CPU, with the help of a very cheap adapter).

And... in 2002-2003 you could upgrade even further, to a PIII 1400 (with a PowerLeap adapter), if you had a very good motherboard, of course.

If Intel had decided to keep the slot format, they could have done a lot of interesting things. Imagine if, instead of going the P4 route, they had kept the PII architecture, clocked a little higher, with cache on the CPU plus a massive amount of extra, slower cache memory on the cartridge. Then maybe 2 or even 4 CPUs on the PCB, with a huge amount of shared L3 cache... But they went NetBurst 🙁

Reply 19 of 27, by 640K!enough

User metadata
Rank Oldbie
RaverX wrote:

It was durable - no pins on the CPU or on the motherboard.

It wasn't as durable as you might think. There were contacts in the slot, and they were relatively fragile. I have seen a few cases where people crushed the contacts within the processor slot. In almost all cases, given that replacement parts weren't generally available, it meant that the board was scrap.

RaverX wrote:

If Intel had decided to keep the slot format, they could have done a lot of interesting things. Imagine if, instead of going the P4 route, they had kept the PII architecture, clocked a little higher, with cache on the CPU plus a massive amount of extra, slower cache memory on the cartridge. Then maybe 2 or even 4 CPUs on the PCB, with a huge amount of shared L3 cache... But they went NetBurst

That approach would quickly become unmanageable in a slot design, unless they moved to some sort of multi-row, high-density pin header. With their multi-processor and multi-core designs, more pins are inevitably required. The contacts on the card edge would ultimately have to be too thin to be practical, or they would have to add more rows of contacts, making insertion and removal an even more delicate procedure. With the slot design as it was, I say goodbye and good riddance. It was generally a bad idea to begin with, and mostly just to keep competition from AMD et al at bay.