VOGONS

To end the AMD v. Intel debate.


Reply 40 of 181, by gdjacobs

Rank: l33t++
Scali wrote:

It is? To me it looks like AMD just invented the 'chiplet' marketing name to do essentially the same as what Intel and others have been doing for years with multi-chip modules.

It is, in this space. Conventional wisdom has been to package multiple copies of a single ASIC, as with the dual-die NetBurst parts, Core 2 Quad, and IBM's POWER MCMs. Intel's MCMs were homogeneous VLSI (aside from the cache chips on the slot cartridges). The Zen MCM is more akin to Intel's hub architecture collapsed onto a single ceramic package. You'd probably have to look at IBM's z/Architecture hardware to find a similar chip-on-module approach with different dies for specific functions.

All hail the Great Capacitor Brand Finder

Reply 43 of 181, by Scali

Rank: l33t
gdjacobs wrote:

Conventional wisdom has been to package multiple copies of a single ASIC, as with the dual-die NetBurst parts, Core 2 Quad, and IBM's POWER MCMs. Intel's MCMs were homogeneous VLSI (aside from the cache chips on the slot cartridges).

Which is why I specifically chose Westmere.
Firstly because it is heterogeneous (CPU-die and GPU-die).
Secondly, because it mixes 32 nm and 45 nm dies on a single package, which is another advantage promoted for 'chiplets'.

The rest is up for debate... If you can slice up an APU into GPU and CPU dies, then obviously you could also slice up your CPU into multiple logic blocks and put them on separate dies.
Apparently at the time of Westmere, there was no reason to go that route. Perhaps today the landscape is different.

One could even argue that 'chiplets' go back to the origins of computing, before the concept of a CPU existed:
Multiple chips, or even multiple logic boards, were strapped together to form a 'processing unit'.

I hardly see that as 'innovative', but perhaps that's just me.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 45 of 181, by Bruninho

Rank: Oldbie

In this flame war, I was always on Intel's side. I have used Intel CPUs my entire life so far and never used anything else. Okay, that's almost a lie; I used PowerPC Macintoshes at some point. But at least I never used AMD...

"Design isn't just what it looks like and feels like. Design is how it works."
JOBS, Steve.
READ: Right to Repair sucks and is illegal!

Reply 46 of 181, by 386SX

Rank: l33t
bfcastello wrote:

In this flame war, I was always on Intel's side. I have used Intel CPUs my entire life so far and never used anything else. Okay, that's almost a lie; I used PowerPC Macintoshes at some point. But at least I never used AMD...

I always preferred the alternative choice, usually the one everyone criticized. Going for the K6-2 (which, I admit, wasn't as fast as I expected) is one example, and I had only AMD CPUs until the Atom N270/N450. Nowadays I don't see many differences. I also don't have any modern components to put a config together. The fastest, most modern CPU I have to test or use is a Core 2 E8600, plus an FM2 motherboard I don't even know works; the best CPU for it costs too much, so I don't think I'll ever build anything with it.
The same goes for GPUs: back in the day I liked the Rage Fury Maxx, the Savage2000 and the Kyro II.

Reply 47 of 181, by rmay635703

Rank: Oldbie
bfcastello wrote:

In this flame war, I was always on Intel's side. I have used Intel CPUs my entire life so far and never used anything else. Okay, that's almost a lie; I used PowerPC Macintoshes at some point. But at least I never used AMD...

I always had Cyrix until this century, the only exception being my 1000rlx.

I would still be using Cyrix if they hadn't sold to VIA.

Reply 48 of 181, by badmojo

Rank: l33t
appiah4 wrote:

You are being obtuse on purpose surely Scali.

Of course he is, that’s his favourite game and thus there’s no point engaging IMO.

On topic, I’m loving on my Ryzen after years of loving on my i5 after years of loving on my Athlon 64 - swings and roundabouts in my book 😀

Life? Don't talk to me about life.

Reply 49 of 181, by gdjacobs

Rank: l33t++
Scali wrote:
gdjacobs wrote:

Conventional wisdom has been to package multiple copies of a single ASIC, as with the dual-die NetBurst parts, Core 2 Quad, and IBM's POWER MCMs. Intel's MCMs were homogeneous VLSI (aside from the cache chips on the slot cartridges).

Which is why I specifically chose Westmere.
Firstly because it is heterogeneous (CPU-die and GPU-die).
Secondly, because it mixes 32 nm and 45 nm dies on a single package, which is another advantage promoted for 'chiplets'.

The rest is up for debate... If you can slice up an APU into GPU and CPU dies, then obviously you could also slice up your CPU into multiple logic blocks and put them on separate dies.
Apparently at the time of Westmere, there was no reason to go that route. Perhaps today the landscape is different.

Fair enough, although an onboard GPU doesn't really form part of the CPU (yet).

Scali wrote:

One could even argue that 'chiplets' go back to the origins of computing, before the concept of a CPU existed:
Multiple chips, or even multiple logic boards, were strapped together to form a 'processing unit'.

I hardly see that as 'innovative', but perhaps that's just me.

I don't know what the performance penalty of off-die interconnects is, but if they can achieve that kind of I/O performance between dies without integration on the same wafer, it delivers real benefits in terms of defect-related yield, wafer utilization, and design flexibility. I suspect modern high-speed serial interconnects play a big role in making this possible.

All hail the Great Capacitor Brand Finder

Reply 50 of 181, by mothergoose729

Rank: Oldbie
gdjacobs wrote:

I don't know what the performance penalty of off-die interconnects is, but if they can achieve that kind of I/O performance between dies without integration on the same wafer, it delivers real benefits in terms of defect-related yield, wafer utilization, and design flexibility. I suspect modern high-speed serial interconnects play a big role in making this possible.

That is a good point. As nodes continue to shrink, achieving a good yield on large dies gets harder. Pushing down to 5nm or 3nm might necessitate a chiplet design.
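
To put rough numbers on that: with a simple Poisson defect model (yield ≈ e^(-defect density × die area)), smaller dies win quickly. The defect density and die areas in the sketch below are made-up placeholder figures purely for illustration, not data for any real process or product:

/* Back-of-envelope yield comparison using a simple Poisson defect model:
 * yield = exp(-D0 * A). D0 and the die areas are assumed placeholder
 * values for illustration only. Compile with: cc yield.c -lm */
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double d0      = 0.1;  /* assumed defects per cm^2                       */
    const double big_die = 6.0;  /* hypothetical 600 mm^2 monolithic die, in cm^2  */
    const double chiplet = 0.8;  /* hypothetical 80 mm^2 chiplet, in cm^2          */

    double y_big     = exp(-d0 * big_die);   /* chance the whole big die is defect-free */
    double y_chiplet = exp(-d0 * chiplet);   /* chance a single chiplet is defect-free  */

    printf("monolithic die yield: %5.1f%%\n", 100.0 * y_big);
    printf("single chiplet yield: %5.1f%%\n", 100.0 * y_chiplet);
    return 0;
}

With those assumed numbers the big die yields roughly 55% while each small chiplet yields over 90%, and a defective chiplet only costs you a small die rather than a whole 600 mm² one.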

I don't think this is the reason, but it is conspicuous how much better AMD has done with TSMC and 7 nm on its architectures, compared to Intel's monolithic-die approach on its fraught 10 nm process.

Reply 52 of 181, by Scali

Rank: l33t
gdjacobs wrote:

Fair enough, although an onboard GPU doesn't really form part of the CPU (yet).

It does actually, in this case.
The CPU and GPU share the memory controller. In the case of Westmere, the memory controller is actually on the GPU-die. So every memory access of the CPU also passes through the GPU die. I'd say that makes it "part of the CPU".
In fact, because the memory controller is on the GPU-die, nearly all I/O is on the GPU-die, including the PCIe controller etc.
See here for details: https://www.anandtech.com/show/2901

gdjacobs wrote:

I don't know what the performance penalty of off-die interconnects is, but if they can achieve that kind of I/O performance between dies without integration on the same wafer, it delivers real benefits in terms of defect-related yield, wafer utilization, and design flexibility. I suspect modern high-speed serial interconnects play a big role in making this possible.

As said, the memory controller is on the GPU-die, and the interface between CPU and memory is arguably the most high-bandwidth I/O interface that a CPU has.
There does not appear to be any tangible performance penalty for memory access on Westmere. As in, you can measure it with synthetic tests, but in practice, the caches cover it up, and actual performance is fine. Of course it doesn't help that Westmere used a 45 nm die here, with somewhat outdated technology.
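
For anyone who wants to see that kind of synthetic test for themselves, here is a minimal pointer-chasing sketch (the buffer size, step count and use of clock_gettime are my own assumptions, not taken from any particular benchmark). Each load depends on the previous one, so the time per step approximates access latency rather than bandwidth:

/* Minimal pointer-chasing latency sketch (POSIX; compile with -O2).
 * All sizes and iteration counts are arbitrary illustrative choices. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N     (64UL * 1024 * 1024 / sizeof(size_t))  /* ~64 MB, well past the caches */
#define STEPS (1L << 24)                             /* number of dependent loads    */

int main(void)
{
    size_t *buf = malloc(N * sizeof(size_t));
    if (!buf) return 1;

    /* Sattolo's algorithm: build one big random cycle so the hardware
     * prefetcher cannot predict the access pattern. */
    for (size_t i = 0; i < N; i++) buf[i] = i;
    for (size_t i = N - 1; i > 0; i--) {
        size_t j = (((size_t)rand() << 16) ^ (size_t)rand()) % i;
        size_t t = buf[i]; buf[i] = buf[j]; buf[j] = t;
    }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    size_t idx = 0;
    for (long s = 0; s < STEPS; s++)
        idx = buf[idx];                  /* dependent load chain */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (double)(t1.tv_nsec - t0.tv_nsec);
    printf("~%.1f ns per access (idx=%zu)\n", ns / STEPS, idx);

    free(buf);
    return 0;
}

Run it once with the 64 MB buffer and once with a buffer that fits in cache, and the gap between what the synthetic test measures and what the caches let you feel in practice becomes obvious.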

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 53 of 181, by gdjacobs

Rank: l33t++
Scali wrote:

It does actually, in this case.
The CPU and GPU share the memory controller. In the case of Westmere, the memory controller is actually on the GPU-die. So every memory access of the CPU also passes through the GPU die. I'd say that makes it "part of the CPU".
In fact, because the memory controller is on the GPU-die, nearly all I/O is on the GPU-die, including the PCIe controller etc.
See here for details: https://www.anandtech.com/show/2901

Indeed, it's part of the general trend in the industry of moving more core logic into the CPU package.

All hail the Great Capacitor Brand Finder

Reply 54 of 181, by Scali

Rank: l33t
gdjacobs wrote:

Indeed, it's part of the general trend in the industry of moving more core logic into the CPU package.

So then, can we all agree that splitting up the CPU logic into multiple dies in the CPU package (or 'chiplets', depending on marketing terminology) is not that innovative in 2019?
Westmere is prior art from early 2010.
Or am I just deliberately being obtuse?

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 55 of 181, by appiah4

Rank: l33t++

No, you are being willfully obtuse, because Westmere did not split the CPU's cores into chiplets and did not use an interposer or anything similar to Infinity Fabric to mitigate the latency issues that a split-core approach would entail. Westmere was merely a CPU paired with a GPU/I/O bridge die, and is in no way, shape, or form what AMD did with Ryzen, where the aim was core-count modularity. If Intel could just take Westmere (which was a crude solution for integrating graphics into the CPU, not an attempt to split up the CPU for more cores and better yields) and leverage that technology against AMD, they would have done it by now; it has been three years. The example you cite is related neither in what it aims to achieve nor in how it is achieved.

Retronautics: A digital gallery of my retro computers, hardware and projects.

Reply 56 of 181, by Scali

Rank: l33t
appiah4 wrote:

No, you are being willfully obtuse, because Westmere did not split the CPU's cores into chiplets and did not use an interposer or anything similar to Infinity Fabric to mitigate the latency issues that a split-core approach would entail. Westmere was merely a CPU paired with a GPU/I/O bridge die, and is in no way, shape, or form what AMD did with Ryzen, where the aim was core-count modularity. If Intel could just take Westmere (which was a crude solution for integrating graphics into the CPU, not an attempt to split up the CPU for more cores and better yields) and leverage that technology against AMD, they would have done it by now; it has been three years. The example you cite is related neither in what it aims to achieve nor in how it is achieved.

Or... you drank the AMD koolaid.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 57 of 181, by appiah4

Rank: l33t++
Scali wrote:
appiah4 wrote:

No, you are being willfully obtuse, because Westmere did not split the CPU's cores into chiplets and did not use an interposer or anything similar to Infinity Fabric to mitigate the latency issues that a split-core approach would entail. Westmere was merely a CPU paired with a GPU/I/O bridge die, and is in no way, shape, or form what AMD did with Ryzen, where the aim was core-count modularity. If Intel could just take Westmere (which was a crude solution for integrating graphics into the CPU, not an attempt to split up the CPU for more cores and better yields) and leverage that technology against AMD, they would have done it by now; it has been three years. The example you cite is related neither in what it aims to achieve nor in how it is achieved.

Or... you drank the AMD koolaid.

Oh yeah, ad hominem. That really helps your argument. Not that I would expect better from you.

Retronautics: A digital gallery of my retro computers, hardware and projects.

Reply 58 of 181, by Scali

Rank: l33t
appiah4 wrote:

Oh yeah, ad hominem.

Not really sure how else to answer the ad hominem of "you're being willfully obtuse", followed by some random AMD marketing propaganda with no basis in reality or technology whatsoever (a chip from 2019 is more advanced than one from 2010? No shit, Sherlock! But more advanced is not the same as innovative).
We're never going to agree.
You are deluded to the point that you think AMD is being innovative. Fine. I don't agree, and I have shown prior art. End of debate.

I could also point out that Westmere was neither the first nor the only example of Intel integrating a GPU with the CPU, and that they actually built single-die solutions *before* Westmere, making Westmere's two-die solution a deliberate choice for economic reasons. But none of that is going to register with you anyway.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/