VOGONS


To end the AMD v. Intel debate.


Reply 60 of 181, by gdjacobs

User metadata
Rank l33t++
Scali wrote:

So then, can we all agree that splitting up the CPU logic into multiple dies in the CPU package (or 'chiplets', depending on marketing terminology) is not that innovative in 2019?
Westmere is prior art from early 2010.
Or am I just deliberately being obtuse?

Well, "innovative" is a very overused word these days. I tend to think not much is truly innovative. However, Zen was a deliberate and counter-intuitive choice to solve the problems I listed above. Westmere was a very logical and predictable step in the existing trend to further integration.

All hail the Great Capacitor Brand Finder

Reply 61 of 181, by Scali

Rank l33t
gdjacobs wrote:

However, Zen was a deliberate and counter-intuitive choice to solve the problems I listed above.

I don't see how it would be counter-intuitive, to be honest.
Smaller dies -> higher yields. Doesn't get more intuitive than that. We've known this for decades.
And as said, other companies have done that many times before in the past, and currently choose to stack chips on top of each other, a more advanced/innovative way of solving the problems than AMD's approach.
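The yield intuition can be put in numbers with the classic Poisson defect model; the figures below are my own illustrative assumptions, not anything from this thread:

```python
import math

def poisson_yield(area_mm2, defects_per_mm2):
    """Fraction of dies with zero defects, assuming Poisson-distributed defects."""
    return math.exp(-defects_per_mm2 * area_mm2)

D0 = 0.002  # assumed defect density (defects per mm^2); purely illustrative

# One monolithic 400 mm^2 die vs. the same logic split into 100 mm^2 chiplets.
# Bad chiplets are discarded individually before packaging, so the fraction of
# usable silicon is the per-chiplet yield, not the product over four of them.
monolithic = poisson_yield(400, D0)
chiplet = poisson_yield(100, D0)

print(f"good monolithic dies: {monolithic:.1%}")  # ~44.9%
print(f"good chiplets:        {chiplet:.1%}")     # ~81.9%
```

Smaller dies waste less silicon per defect, which is the decades-old intuition being referred to.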

gdjacobs wrote:

Westmere was a very logical and predictable step in the existing trend to further integration.

How is it logical and predictable?
Westmere's predecessor is Nehalem.
While there was no integrated GPU in Nehalem, they did integrate the memory controller, and in the case of Lynnfield, also the PCIe controller, absorbing the traditional northbridge functions (what remained of the chipset became the PCH: Platform Controller Hub).

So Westmere is not that logical and predictable in moving the memory controller/PCH back out of the CPU die, and putting it on the die that includes the GPU.
Combining a 32 nm and 45 nm die in a single package is also somewhat unpredictable. Intel had done multi-chip modules before, but those were basically two identical dies.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 62 of 181, by athlon-power

Rank Member

As far as I'm concerned, whether Intel or AMD made the "chiplet" design first isn't a big deal. (In my opinion, it's a terrible design; just put it all on a card. You know, I wouldn't have to change the thermal paste every time I swapped a CPU in and out of a motherboard. Imagine buying a CPU and just having to remove the old one, put in the new one, and being done: no heatsink mounting, nothing.) This is the same as the "who had the first 1000 MHz CPU" argument. Without AMD, we'd all be stuck with 4GB of RAM or less, and without Intel, we'd be stuck with only physical cores, wasting resources in the CPU that could be put to other tasks. Can AMD claim all innovations? No. Can Intel claim all innovations? No.

Without Intel, there wouldn't be x86 at all, and without AMD, Intel would never have gotten the big deal with IBM, because IBM wanted an alternative manufacturer other than Intel. Guess who that was?

Gary Kildall should be in Bill Gates' place, Apple should have died in 1998, and IBM should've remained the PC market King.

OS/2 should be the dominant operating system, Windows should be free, and Linux should cost $4.32596678795623 billion US dollars per 0.002598352 seconds as a subscription fee. The World Wide Web should still be run on Sun Microsystems computers, SGI should be rendering all 3D things, and the PC should be nothing more than a word processor. If you want to play a game, buy the all-new SGI Indy 676-10.000, the graphical workhorse that can render Crysis at 81920x61440 at 1.38259688 trillion frames per second, so you can see Gordon intentionally initiate the resonance cascade so he can kill the Vortigaunts he always had an aggressive negative bias towards, after he had a dream where they came into his room and smashed the IBM Aptiva he spent all of his McDonald's savings on into oblivion.

No, I don't ingest illicit chemicals, but I made this thread halfway as a joke, and halfway because both companies made innovations in their time. One will be dominant over the other in the mind of the people. Whether or not you think one is better doesn't matter; I'm not talking about one being absolutely better than the other. I'm simply saying that they switch positions in the public eye every few years, and that neither is really better than the other if you look at it objectively.

Who made CP/M? Bill Gates or Gary Kildall?

What was MS-DOS based on?

Who became the famous person, always in the public eye?

Билл Гейтс - враг народа ("Bill Gates is an enemy of the people")

(I actually like MS-DOS and Windows a lot, don't kill me please, I've never even used CP/M.)

Where am I?

Reply 63 of 181, by Scali

Rank l33t
athlon-power wrote:

Without AMD, we'd all be stuck with 4GB of RAM or less

Well no.
Firstly, because Intel already introduced PAE, which widened physical addresses to 36 bits and allowed for 64 GB.
Secondly, because there have been tons of 64-bit architectures before AMD even started developing x64.
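That 64 GB figure follows directly from the address width; a quick sanity check (my own illustration, not from the post):

```python
# PAE widened physical addresses on the P6 family from 32 bits to 36 bits.
GiB = 2 ** 30

plain = 2 ** 32 // GiB  # classic 32-bit physical addressing
pae = 2 ** 36 // GiB    # with PAE enabled

print(f"32-bit: {plain} GiB, PAE: {pae} GiB")  # 32-bit: 4 GiB, PAE: 64 GiB
```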

athlon-power wrote:

and without Intel, we'd be stuck with only physical cores, wasting resources in the CPU that could be put to other tasks.

Again, no. Simultaneous multithreading was originally developed by IBM, with early experiments dating back to the 1960s.

There's a lot more in the world than x86.

athlon-power wrote:

Without Intel, there wouldn't be x86 at all, and without AMD, Intel would've never got the big deal with IBM because IBM wanted an alternative manufacturer other than Intel. Guess who that was?

Actually, there were a LOT of second-source manufacturers. AMD was just one of many. There were Siemens, NEC, Harris, etc: http://www.cpu-world.com/CPUs/8088/index.html
AMD wasn't all that important.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 64 of 181, by athlon-power

Rank Member

I stand corrected on the first part.

On the second part, I was talking specifically about x86. IBM having multi-threading in the 1960s doesn't change the fact that the x86 PC market didn't get it on a wide scale until the early 2000s.

Regarding the third part, AMD was at least one of the group of companies that second-sourced Intel's processors, preventing Intel from selling them at drastically exaggerated prices like they've always wanted to, and AMD has recently become a big part of keeping prices low. I'd rather not pay US$1,000 for a low-end processor and US$5,000 for a higher-end one.

I'm arguing in the realms of x86, because x86 is what I know. I still need to do my homework regarding other architectures and fields. Give me another five years, and I might be more competent outside of the x86 field.

Where am I?

Reply 65 of 181, by buckeye

Rank Oldbie
Firtasik wrote:
DNSDies wrote:

Firtasik's charts had a $1100 Intel CPU topping the charts in a few games, with the Ryzen 3800X, which costs about $300, about 7-8 FPS behind it.

i5 9600K costs about $200 and it beats 3800x (about $370) in some games.

Yeah, this is where I'm at a crossroads building a new rig. Can't decide between those two....flip a coin?

Asus P5N-E Intel Core 2 Duo 3.33ghz. 4GB DDR2 Geforce 470 1GB SB X-Fi Titanium 650W XP SP3
Intel SE440BX P3 450 256MB 80GB SSD Radeon 7200 64mb SB 32pnp 350W 98SE
MSI x570 Gaming Pro Carbon Ryzen 3700x 32GB DDR4 Zotac RTX 3070 8GB WD Black 1TB 850W

Reply 66 of 181, by athlon-power

Rank Member
buckeye wrote:

Yeah, this is where I'm at a crossroads building a new rig. Can't decide between those two....flip a coin?

I would say that you should get whatever performs best in the fields you want it to. If you use programs (or games) that are heavily multi-threaded, I would go for whatever has more cores/threads or whatever has better multi-core performance. If your applications are less multi-threaded, you can afford to lose a few cores/threads and save some money.
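That trade-off can be made concrete with Amdahl's law (my own numbers, purely illustrative): if only a fraction p of a workload runs in parallel, extra cores stop paying off quickly.

```python
def amdahl_speedup(p, n):
    """Upper-bound speedup on n cores when a fraction p of the work parallelizes."""
    return 1.0 / ((1.0 - p) + p / n)

# A workload that is only 50% parallel gains little beyond a few cores
# (it tops out below 2x no matter how many you add), while a 95%-parallel
# workload keeps scaling much further.
for p in (0.50, 0.95):
    speedups = [round(amdahl_speedup(p, n), 2) for n in (2, 4, 8, 16)]
    print(f"p={p}: {speedups}")
```

Which is why "more cores" is only worth paying for if your actual applications can use them.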

I personally use an i5-9400 at the moment, having upgraded from a Ryzen 5 1400. Whenever the i5 gets slow (or if I randomly get enough money to build a more powerful computer), I'll upgrade from that to whatever has the best price-to-performance ratio at the time. The i5 has a good enough leg up on both single- and multi-core performance that it was a good upgrade from the Ryzen 5 1400.

Where am I?

Reply 68 of 181, by Scali

Rank l33t
athlon-power wrote:

On the second part, I was talking specifically about x86. IBM having multi-threading in the 1960's didn't change the fact that the x86 PC market didn't get that until the early 2000's, on a wide scale.

No, but it does mean that the technology was already around, so it was just a matter of "which x86 company will apply the tech to x86 first", rather than "without x86 company X we would never have Y".

athlon-power wrote:

Regarding the third part, AMD was at least part of the fleet of companies that seceded from Intel, preventing them from being able to sell their processors for drastically exaggerated prices like they've always wanted to, and has become a big part recently in keeping prices low. I'd rather not have to pay US$1,000 for a low-end processor, and US$5,000 for a higher-end one.

Pretty sure that price gouging was the last thing on Intel's mind in 1981. They basically had NO customers for the 8088/8086 (everyone else was using Z80, 6502, 6809, 68000 etc).
They needed the IBM deal, and they needed it badly. Which is the only reason why they agreed to a second-source.
They'd have had to compete on price with the other CPU manufacturers anyway to get the IBM deal.
Besides, I'm not sure to what extent the second-source parties could even set their own pricing on their products, as the second-source license wasn't all that flexible.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 69 of 181, by gdjacobs

Rank l33t++
Scali wrote:

I don't see how it would be counter-intuitive, to be honest.
Smaller dies -> higher yields. Doesn't get more intuitive than that. We've known this for decades.
And as said, other companies have done that many times before in the past, and currently choose to stack chips on top of eachother, in a more advanced/innovative way to solve the problems than AMD's approach.

It was counterintuitive to use a less integrated solution, as the engineering tendency has always been in the opposite direction, although it's also probably an inevitable consequence of advancements in lithography slowing down.

Scali wrote:

How is it logical and predictable?
Westmere's predecessor is Nehalem.
While there was no integrated GPU in Nehalem, they did integrate the memory controller, and in the case of Lynnfield, also the PCIe controller, absorbing the traditional northbridge functions (what remained of the chipset became the PCH: Platform Controller Hub).

So Westmere is not that logical and predictable in moving the memory controller/PCH back out of the CPU die, and putting it on the die that includes the GPU.
Combining a 32 nm and 45 nm die in a single package is also somewhat unpredictable. Intel had done multi-chip modules before, but those were basically two identical dies.

Intel didn't have a true successor to Penryn and the G45 chipset in the value and business segments until Arrandale and Clarkdale came out, so I'm not sure the analogy holds. It's possible that Ironlake in some way required a local memory controller, but Intel might have moved the IMC for yield reasons rather than simply bolting the IGP onto a die shrink of Lynnfield. Even so, the 'dales have turned out to be a blip in the progression at Intel, whereas Zen appears to be a wholesale move towards package-level integration.

As to your other point, vertical stacking is inappropriate for more thermally demanding applications when using air cooling. It's also nothing new under the sun.
(image: Cray-2 module, side view)

All hail the Great Capacitor Brand Finder

Reply 70 of 181, by 386SX

Rank l33t
buckeye wrote:
Firtasik wrote:
DNSDies wrote:

Firtasik's charts had a $1100 Intel CPU topping the charts in a few games, with the Ryzen 3800X, which costs about $300, about 7-8 FPS behind it.

i5 9600K costs about $200 and it beats 3800x (about $370) in some games.

Yeah, this is where I'm at a crossroads building a new rig. Can't decide between those two....flip a coin?

That's why I feel it's a bit strange that the market always ends up with two companies and no more alternatives.

Reply 71 of 181, by Scali

Rank l33t
gdjacobs wrote:

Intel didn't have a true successor to Penryn and the G45 chipset in the value and business segments until Arrandale and Clarkdale came out, so I'm not sure the analogy holds.

I'm pretty sure it does.
You are trying to make an economic argument (using an arbitrary metric like the value/business segment) in what is a technical discussion (where the same generation of technology is generally scaled across the various segments).

gdjacobs wrote:

As to your other point, Vertical stacking is inappropriate in more thermally demanding applications when using air cooling. It's also nothing new under the sun.

Now you are confusing things.
You are showing an example of PCB-level stacking. Which is clearly different from die-stacking.
As for vertical stacking being inappropriate... That depends on many factors.
There's an entire memory standard based on the concept: HBM. Stacking makes it easy to create very dense interconnects between two dies, which is great for high-performance I/O.

Of course there are limitations to every concept, but the beauty is that multiple concepts can be combined: You can create multiple 'islands' of stacked chips, and combine the best of both worlds.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 72 of 181, by imi

Rank l33t
Scali wrote:

Now you are confusing things.
You are showing an example of PCB-level stacking. Which is clearly different from die-stacking.

about as similar as comparing AMD's multi-chip CPU architecture with a GPU and CPU die on the same PCB 😜

Reply 73 of 181, by Scali

Rank l33t
imi wrote:

about as similar as comparing AMDs multi-chip CPU architecture with a GPU and CPU die on the same PCB 😜

Firstly, it's not a PCB, it's a CPU package.
Secondly, it's not just a GPU and CPU, as already discussed before. The GPU part also includes the memory controller and northbridge functionality.
That makes it a multi-chip CPU architecture, period. The CPU cannot function without combining functional units from both dies; neither die by itself is an independently functional module. Which is basically exactly what AMD is doing, even if AMD chose to draw the subdivision around slightly different groups of units. That doesn't make it innovative, which is the only reason I brought up Westmere in the first place: prior art for the concept of using multiple dies to form a single processing unit.

To quote Wikipedia on Zen 2:

Zen 2 moves to a multi-chip module design where the I/O components of the CPU are laid out on its own, separate die, which is also called a chiplet in this context.

Westmere does exactly that: the I/O components are on the 'GPU' die, not on the CPU die.

As physical interfaces don't scale very well with shrinks in process technology, their separation into a different die allows these components to be manufactured using a larger, more mature process node than the CPU dies.

Westmere does exactly that:
The CPU die is 32 nm, and the GPU die is 45 nm.

So really, why exactly did you make this post, with these obvious factual errors?
I mean, an AMD vs Intel debate, fine... But can we at least stick to the facts?
It's incredibly annoying to have to keep explaining the same thing over and over again because the AMD side is being willfully obtuse (either that, or basically just clueless about what a modern CPU actually consists of, and unable to look further than the overly dumbed-down view of Westmere as "a CPU die and a GPU die").

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 74 of 181, by imi

Rank l33t
Scali wrote:

But can we at least stick to the facts?

sure...

(attached image: west_1.jpg)

see that green square thing under the dies? that is in fact a PCB 😀 oh the horror.
combine that with the dies and the heatspreader (and maybe a few capacitors, solder/TIM) and you completed the package.

yes, westmere has the memory controller in the gpu die; we established that some pages ago already, as you were very adamant about it 😀 that still doesn't make it anything like AMD's zen architecture, apart from the fact that it is, factually, yes, a multi-chip module (MCM) design... and I mean yeah, they both use silicon I guess, so there's at least one more similarity.

and yeah MCMs have existed for quite a while, so have bricks, and yet architects come up with new ideas to combine them.

just like HBM and the previously posted picture are factually both stacked circuits, of course; one of them is stacked silicon and one of them has a bunch of other stuff in between, but hey, the design principle behind them is similar, so it's just as good as your analogy 😀

you really can't let it go can you? 🤣
why did I make that post? because entertainment 🤣

Reply 75 of 181, by Scali

Rank l33t
imi wrote:

see that green square thing under the dies? that is in fact a PCB 😀 oh the horror.

It is not, actually. Calling it a PCB is extremely misleading in this context.
PCB: Printed Circuit Board.
This is not a 'circuit board' by any stretch of the imagination. It's a CPU package with 2 dies. There's no 'circuit' to speak of other than the interconnects between dies and socket.

If you went the PCB route, then you could factor in any number of mainboards with the CPU and GPU on them. That is a different class of hardware. This CPU package is exactly the same as AMD's Zen 2 one, for all intents and purposes, even if the materials used may differ.

imi wrote:

that still doesn't make it anything like AMD's zen architecture

How is it 'not anything like' when I basically quoted the definition of what makes Zen 2 from Wikipedia, and correlated that to Westmere's features?
Right, it's exactly everything like Zen 2. You're just wrong. Desperately trying to make AMD look innovative, desperately trying to rewrite history.
Instead of providing arguments to argue your case (or hey, here's a novel idea: admit that I'm right about Westmere and its similarities to Zen 2), you just throw strawmen out there, and derail the discussion in immature ways.

imi wrote:

you really can't let it go can you? 🤣
why did I make that post? because entertainment 🤣

Just go fuck yourself.
I'm trying to have a mature conversation here, trying to teach people some interesting facts about CPU designs, past and present...
I see really no reason why I would have to 'let go'.
And all we get are stupid AMD fanboys and trolls.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 77 of 181, by mothergoose729

Rank Oldbie

I don't know why we are wasting so much time talking about AMD and Intel.

VIA's 'Isaiah' cores are the pinnacle of technological achievement.

-Industry leading TSMC 40nm processes!
-Blazing fast monolithic dual core!
-Revolutionary dual-die architecture allowing for up to 4 total cores!
-Never before seen big/little ALU architecture allowing for peak performance and efficiency!
-Not just one, but two L1 caches!
-Industry-first out-of-order and superscalar processing!
-Integrated media controllers and encryption!

This is what the best CPU architecture in the world looks like

(image: VIA Isaiah architecture die plot)

Does Intel or AMD have groundbreaking pictures like this one? I thought not.

Check out these benchmarks showing VIA absolutely demolishing, crushing, killing, destroying, and headshotting the competition with one simple trick that CPU companies HATE

(hint, its uncompromising performance)!

There is nothing fishy about this graph. Look at how VIA DOMINATES Intel and AMD in CINEBENCH R15
(attached graph: Cinebench results)

A completely fair comparison that is missing absolutely no contextually appropriate information.
crystalmarkatomvsvia1.PNG

(I was going to include Crystalmark FPU, but FP is for pussies).

And look, gaming performance! How much performance you say? And which games? ALL OF IT. All of the gaming.
(attached graph: gaming benchmarks)

On top again! Booyah, winning!
https://www.youtube.com/watch?v=UaUa_0qPPgc

I rest my case. Let's get this discussion back on track, which should include how long can Intel and AMD keep deceiving the public, and when will dirty casuals finally start buying VIA powered servers?
