VOGONS

To end the AMD v. Intel debate.


Reply 121 of 181, by mothergoose729

Rank: Oldbie
Scali wrote:
mothergoose729 wrote:

The rumor is that Intel is abandoning their 10nm node entirely for desktop and HEDT. They won't release high-performance parts until 2022, on a 7nm node.

Depends on which rumours you want to include/exclude I suppose 😀
This was a rumour/leak 3 days ago, about the 14++ nm Comet Lake-S I mentioned: https://wccftech.com/intel-comet-lake-desktop … k-z490-spotted/

And in that same article they also link to this article from Nov 1st: https://wccftech.com/intel-10nm-desktop-cpus- … ing-early-2020/
And it claims to include an official statement from Intel, which is allegedly a response to the rumour that Intel would have abandoned 10 nm altogether, which very specifically confirms desktop SKUs on 10 nm next year.
So 2020 or 2022? You tell me.

Intel does have 10nm parts for mobile, so it isn't totally vaporware. Right now it seems like Intel has had trouble reaching acceptable yields and decent clock speeds. I think that if Intel can't produce a part that competes in servers and HEDT, they won't bother, which means that die size will have to increase a lot to accommodate more cores. If they can manage that in 2020, that is great, but there are many good reasons to be skeptical.

As for the prospects of a new architecture on 14nm, I am skeptical there too. How much faster will it actually be than Coffee Lake? We will see 😀. My expectation is that 2022 will be the most competitive year. I plan to make a purchase then, as I don't expect performance to improve much, or that quickly, after both Intel and AMD are on 5nm and 7nm nodes.

Reply 122 of 181, by Scali

Rank: l33t
mothergoose729 wrote:

Intel does have 10nm parts for mobile, so it isn't totally vaporware.

Well, the article only speaks of 'desktop' parts, it doesn't say whether they'd be low-end, mainstream or high-end.
It would somehow make sense to build desktop parts, because they're generally less sensitive to power consumption etc.
And it would make sense for Intel to continue producing chips in some volume, because that's the only way to fine-tune the process and iron out any issues.
I don't think a manufacturer would abandon any node that quickly, given the extreme cost of retooling your manufacturing facilities to a new node.

mothergoose729 wrote:

As for the prospects of a new architecture on 14nm, I am skeptical there too. How much faster will it actually be than Coffee Lake? We will see 😀.

It reminds me of NVIDIA with the GeForce 900 series. They couldn't get another die shrink, because TSMC was behind on their development.
So NV released an architectural update on 28 nm instead, and it was a remarkable jump in performance and power consumption.

Intel could do something similar: (some of) the architectural improvements that they were planning for the update after the 10 nm shrink could be implemented on 14 nm instead.
Intel has shown time and time again with their tick-tock strategy that they could get 8-18% improvements on a given architectural update.
As I said before, there are things 'stuck in the pipeline'. They haven't been sitting still at Intel just because the 10 nm manufacturing is having issues.
If those improvements can't come out on 10 nm, they can move some of them to 14 nm instead.
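
To put that 8-18% figure in perspective, here is a back-of-the-envelope sketch (in Python) of how per-update gains compound; stacking three updates is a purely hypothetical assumption on my part, not something Intel has announced:

# Back-of-the-envelope compounding of per-generation IPC gains.
# The 8-18% range is the tick-tock figure quoted above; stacking
# up to three updates is purely illustrative.

def compounded_gain(per_update_gain, updates):
    """Cumulative speedup factor after a number of architectural updates."""
    return (1.0 + per_update_gain) ** updates

for gain in (0.08, 0.18):
    for updates in (1, 2, 3):
        print(f"{gain:.0%} per update, {updates} update(s): "
              f"{compounded_gain(gain, updates):.2f}x cumulative")

Even at the low end, a couple of stacked updates add up to a noticeable jump on the same node.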

mothergoose729 wrote:

I plan to make a purchase then, as I don't expect performance to improve much or that quickly after both intel and AMD are on 5nm and 7nm nodes.

I wonder if Intel's Foveros (3d stacking of chiplets) will offer any significant improvements once the technology matures.
They have announced one CPU with that technology so far: Lakefield.
It's a 10/14 nm pancake of big/little cores, and then DRAM on top of that.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 123 of 181, by mothergoose729

Rank: Oldbie
Scali wrote:

I don't think a manufacturer would abandon any node that quickly, given the extreme cost of retooling your manufacturing facilities to a new node.

That is an excellent point. If Intel were retooling for 7nm, it would be hard to do in secret.

Scali wrote:

It reminds me of NVIDIA with the GeForce and the 900-series. They couldn't get another die-shrink, because TSMC was behind on their development.
So NV released an architectural update on 28 nm instead, and it was a remarkable jump in performance and power consumption.

Maybe I am misunderstanding your point. Maxwell was 28nm, while Pascal was 16nm TSMC and then later 14nm Samsung.

Scali wrote:

I wonder if Intel's Foveros (3d stacking of chiplets) will offer any significant improvements once the technology matures.
They currently have one CPU on the market with that technology: Lakefield.
It's a 10/14 nm pancake of big/little cores.

There is always a danger in saying that we have reached, or are nearing, the end of the road in terms of CPU performance. Given the current trend lines, though: Skylake was released in 2015, with Skylake-X coming out in 2017. If you bought a Skylake-X 8-core CPU and overclocked it to 4.4 GHz, you would have basically the same performance as a Coffee Lake 9900K or a 3700X four years later. In the future, I expect the pace of performance to only get slower. Maybe a Skylake-X equivalent bought in 2022 could last six or eight years before a significant upgrade is available.

Reply 124 of 181, by Scali

Rank: l33t
mothergoose729 wrote:

Maybe I am misunderstanding your point. Maxwell was 28nm, while pascal was 16nm TSMC and then later 14nm Samsung.

Maxwell was the second 28 nm architecture. That's what I meant. Kepler was already 28 nm (as well as some late Fermi models), and Maxwell was supposed to be 16 nm.

mothergoose729 wrote:

There is always a danger in saying that we have reached or are nearing the end of the road in terms of CPU performance.

Indeed, people thought the end was near with Pentium 4... but then focus moved from clockspeed to IPC and multicore, so we still made major strides.
We now see chiplets and 3d stacking as ways to try to eke out more gains from the same manufacturing.
I somehow don't think companies like Intel and AMD will blindly run into a wall of manufacturing. As long as they still have new architectures and nodes on the roadmap, apparently their feasibility studies told them it was possible. They could be off by one generation. But I don't think it's going to come to a grinding halt overnight.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 125 of 181, by hail-to-the-ryzen

Rank: Member

https://www.cpubenchmark.net/high_end_cpus.html

Ranked by CPU Mark performance below:

CPU Name                         CPU Mark   Single Thread Rating
AMD EPYC 7742                    47,365     2296
AMD Ryzen Threadripper 3960X     47,050     3019
AMD EPYC 7702P                   46,067     2259
AMD EPYC 7452                    38,257     2233
AMD Ryzen 9 3950X                35,913     2997
Intel Xeon W-3175X @ 3.10GHz     33,538     2006
AMD Ryzen 9 PRO 3900             32,899     3019
AMD Ryzen 9 3900X                31,916     2931

https://finance.yahoo.com/quote/AMD/history?p=AMD

This is reflected in the AMD stock price:

Date            Price (USD)
Dec 06, 2019    40.10
Oct 31, 2019    34.37
Aug 31, 2019    30.83
Jun 30, 2019    31.79
May 31, 2019    28.75
Mar 31, 2019    26.42
Feb 01, 2019    24.61
Jan 01, 2019    18.01

Reply 127 of 181, by Bruninho

Rank: Oldbie

Meanwhile...

https://www.techspot.com/review/1955-ryzen-39 … -9900ks-gaming/

"Design isn't just what it looks like and feels like. Design is how it works."
JOBS, Steve.
READ: Right to Repair sucks and is illegal!

Reply 128 of 181, by Scali

Rank: l33t
bfcastello wrote:

Yea, that shows what was discussed above... Apparently I/O is better on the Intel platform, in this case much lower memory latency by default. (Their chart says "higher is better", but it shows latencies in ns, so clearly they mean "lower is better", especially since the tweaked settings show lower readings than the stock settings.)
With tweaking, the Ryzen gets closer, but still not enough to match the game performance.
~40 ns latency vs ~64 ns is quite a significant difference.

Games seem to take virtually no advantage of the doubled core count of the Ryzen (as they say, you might as well get the cheaper 3900X for games).
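
To give a feel for what that latency gap means at the core level, here is a minimal sketch converting the quoted figures into clock cycles; the ~5.0 GHz and ~4.5 GHz operating clocks are round numbers I am assuming for illustration, not figures from the review:

# Convert the quoted memory latencies into core clock cycles.
# The ~40 ns and ~64 ns figures come from the review discussed above;
# the clock speeds are assumed round numbers, for illustration only.

platforms = {
    "Intel, ~40 ns at ~5.0 GHz": (40e-9, 5.0e9),
    "Ryzen, ~64 ns at ~4.5 GHz": (64e-9, 4.5e9),
}

for name, (latency_s, clock_hz) in platforms.items():
    cycles = latency_s * clock_hz
    print(f"{name}: a miss to DRAM stalls roughly {cycles:.0f} cycles")

Roughly 200 versus almost 290 stalled cycles per miss is the kind of difference that latency-sensitive game code tends to feel, even when bandwidth is comparable.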


Out of the box the 3950X was found to be 6% slower on average when compared to the 9900KS, exactly the same margin seen between the 3900X and 9900K when using the slower DDR4-3200 memory (data from a previous review). Then with the tuned memory the 3950X was 4% slower on average which for all practical purposes is virtually an identical gaming experience in all modern titles.
...
This testing also confirms the Core i9-9900K remains a top gaming CPU and the fastest for the price. However, we wouldn't call it the ultimate solution simply because it doesn’t hold the performance crown by a significant margin.
...
When compared to the 3900X which costs about the same, it’s ~5% faster on average for gaming, but the Ryzen 9 comes with a cooler and it’s miles faster in core heavy applications, anywhere from 20 to 60% faster. We also believe the extra cores will future proof the Ryzen better, but that’s probably less of a concern for those buying now as you’ll likely upgrade in 3 to 4 year's time.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 129 of 181, by SirNickity

Rank: Oldbie
appiah4 wrote:

Stop posting lies. Scali can't be wrong.

Umm, about what, exactly? Seems to me he's optimistic about Intel's future, which isn't something you can be "wrong" about until well after the fact. The rest of this thread has touched on "Which company is more innovative?" and "Which is faster?" and "Which is a better value?" Correspondingly, the answers are "there isn't even a metric for this, so it isn't quantifiable and hence doesn't align with 'right' or 'wrong'", "either one, depending on the metric you use", and "depends on what your goals are."

So, if you have an axe to grind, can you just get on with it? The hair-pulling in these pointless and ultimately undefinable arguments is tiresome. Let these guys ponder, pontificate, and predict to their heart's content. It's interesting banter to some.

Reply 130 of 181, by DNSDies

Rank: Member
Firtasik wrote:

Overclocked i5 9600K is faster in every game tested there.

Ryzen scales SPECTACULARLY with faster RAM.
Considering this test uses DDR4-3200 CL14 RAM, using faster RAM (like 3600) and tweaking the timings to get CL13 would show a notable improvement on the Ryzen system.
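
As a rough illustration of why that kind of tuning helps: first-word CAS latency in nanoseconds is the CAS latency divided by the memory clock, which is half the transfer rate. The DDR4-3600 CL13 combination below is the hypothetical tuning suggested above, not a kit from the review:

# First-word CAS latency in ns = CL cycles / memory clock,
# and the memory clock is half the DDR transfer rate, so:
#   latency_ns = cas_latency * 2000 / transfer_rate_in_MT_per_s
# DDR4-3200 CL14 is the kit used in the test; DDR4-3600 CL13 is the
# hypothetical tuned configuration suggested above.

def cas_latency_ns(cas_latency, transfer_rate_mts):
    return cas_latency * 2000.0 / transfer_rate_mts

for name, cl, rate in [("DDR4-3200 CL14", 14, 3200),
                       ("DDR4-3600 CL13", 13, 3600)]:
    print(f"{name}: {cas_latency_ns(cl, rate):.2f} ns first-word latency")

On Zen 2 the faster memory clock also raises the Infinity Fabric clock (it runs 1:1 with the memory clock up to roughly DDR4-3600), which is a big part of why Ryzen responds so well to memory tuning.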

Reply 131 of 181, by pixel_workbench

Rank: Member

To me, AMD's modular design of Zen 2 is a choice resulting from the technological and economic constraints of the time, similar to placing L2 cache next to the CPU, running at half speed, and combining it all in a slot package in the mid-to-late 90s. I don't necessarily see it as some paradigm-shift innovation that all future CPUs will follow.

But regarding the original post, I always find it funny when people say the Pentium 4 was competitive with AMD CPUs. The P4 only started to be competitive when they moved the Northwood to a 533 MHz FSB, and even then at a higher price for similar performance. The P4 was actually superior for desktop users only in the brief period after they moved to the 800 MHz FSB but before the A64 showed up. Reading that the P4 was competitive with anything based on K8 just makes me laugh.

My Videos | Website
P2 400 unlocked / Asus P3B-F / Voodoo3 3k / MX300 + YMF718

Reply 132 of 181, by SirNickity

Rank: Oldbie

I dunno, I think it depends on your use case.

On one end of the spectrum, the performance difference wasn't night and day, so unless you're comparing benchmarks or calculating Total Cost of Integer Operations and normalizing it such that you can determine whether one or the other cost $5 more or less per so many ops / sec., then it just doesn't even matter at all. It's not like the gap was 386SX-to-Pentium wide.

On the other end of the spectrum, when these were current products, I had my flag firmly planted in the Intel camp, and my colleague at a computer store was a rabid AMD fan. Then the Northwoods came out, and he took one home just to see if it was worthy of even being on the showroom floor. He came back the next day, I asked him how it went, and he said "terrible..." then busted out laughing and admitted that, in fact, it had overclocked like a mother and was screaming fast compared to his previous Athlon XP build. (Again, we're probably talking about high single-digit to low double-digit % differences overall, but that was enough to win him over.)

The biggest differentiator, IMO, was the period leading up to the Northwoods. So423 was a mistake. RDRAM was a mistake. It took forever for Intel to release a DDR chipset, with high-latency and expensive RDRAM and slow SDRAM as the only options. OTOH, AMD was plagued by having cowboy vendors designing their chipsets -- SiS, VIA, ALI... Nothing held a candle to the predictable performance and reliability of an Intel chipset until nVidia promised to change the game. (And then it didn't, really.) If you were after raw speed or lowest cost, you could build an AMD hot-rod at a reasonable price. Or you could toss some Intel parts at the general direction of a case, give it a good shake to jostle everything to where it should be, and it would just work without all the care, feeding, maintenance, and high-RPM banshees. You had the choice, and that's great.

Reply 133 of 181, by carlostex

Rank: l33t

Supposedly, Zen 3 (Vermeer) is moving away from split CCXs and going with a unified 8-core complex per chiplet. That should help reduce cache latency significantly.



Reply 134 of 181, by Scali

Rank: l33t
pixel_workbench wrote:

To me, AMD's modular design of Zen 2 is a choice resulting from the technological and economic constraints of the time, similar to placing L2 cache next to the CPU, running at half speed, and combining it all in a slot package in the mid-to-late 90s. I don't necessarily see it as some paradigm-shift innovation that all future CPUs will follow.

I agree with what you say in general, but I will add that the earlier developments were a result of Moore's law giving us more and faster circuits.
Adding cache to the package or the chip was possible because it became small and cheap enough. So basically, constraints were broken.

I think we are currently near the end of Moore's law, so it could be that there is no 'way back' to big monolithic dies that include everything.

pixel_workbench wrote:

But regarding the original post, I always find it funny when people say the Pentium 4 was competitive with AMD CPUs. The P4 only started to be competitive when they moved the Northwood to a 533 MHz FSB, and even then at a higher price for similar performance.

I think you gave the reason already: you factor in price.
If you don't factor in price, and judge Pentium 4 purely on performance, then yes, it was competitive with AMD CPUs.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 135 of 181, by appiah4

Rank: l33t++
SirNickity wrote:

The biggest differentiator, IMO, was the period leading up to the Northwoods. So423 was a mistake. RDRAM was a mistake. It took forever for Intel to release a DDR chipset, with high-latency and expensive RDRAM and slow SDRAM as the only options. OTOH, AMD was plagued by having cowboy vendors designing their chipsets -- SiS, VIA, ALI... Nothing held a candle to the predictable performance and reliability of an Intel chipset until nVidia promised to change the game. (And then it didn't, really.) If you were after raw speed or lowest cost, you could build an AMD hot-rod at a reasonable price. Or you could toss some Intel parts at the general direction of a case, give it a good shake to jostle everything to where it should be, and it would just work without all the care, feeding, maintenance, and high-RPM banshees. You had the choice, and that's great.

This is bullshit. Every one of VIA's K7 (KT133A onwards) and K8 chipsets was rock solid and performed extremely well. Chipset issues were a thing in the Slot A and very early Socket A period, but from the Athlon XP onwards AMD was never wanting for a solid chipset. This myth is as old as "ATI/AMD drivers suck". As for nForce: to this day I hate the nForce 1/2/3 chipsets with a passion; they were fast but quirky, buggy pieces of shit that had some kind of incompatibility with anything you put on them.

Retronautics: A digital gallery of my retro computers, hardware and projects.

Reply 136 of 181, by Scali

Rank: l33t
SirNickity wrote:

It took forever for Intel to release a DDR chipset, with high-latency and expensive RDRAM and slow SDRAM as the only options.

Yes, RDRAM meant that Intel had to have an exclusive high-end deal, and they couldn't offer DDR until that deal ran out.
The SDRAM chipsets actually had dormant DDR support, which was simply enabled once the RDRAM exclusivity deal was over.

It's easy to say now that RDRAM was a mistake. But the fact of the matter is that it had excellent performance, better than DDR (which explains why it was also used by Sony for the PlayStation 2, for example). The main issue was cost.
But if there had been no low-cost alternative from AMD, it could have been a different story. RDRAM could have become the standard, could have had widespread adoption, cost could have come down, and exclusivity deals would no longer be an issue.
I think technically RDRAM was a good choice at the time, for the Pentium 4 platform. It delivered the bandwidth that a deeply pipelined CPU at high clockspeeds like the Pentium 4 required. Latency wasn't an issue for the P4 design, the huge caches dealt with that.
The business deal was not so good, and neither was the price. But as I say, those would have been moving targets.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 138 of 181, by appiah4

Rank: l33t++
Scali wrote:

Yes, RDRAM meant that Intel had to have an exclusive high-end deal, and they couldn't offer DDR until that deal ran out.

Yet another case of Intel choosing to fuck over a market they had a monopoly on, for personal gain, feeling they could dictate whatever costs they whimsically willed onto the customers. But then, who would be surprised.

Retronautics: A digital gallery of my retro computers, hardware and projects.

Reply 139 of 181, by Scali

Rank: l33t
The Serpent Rider wrote:

Dual channel DDR could be better.

Actually, no.
When RDRAM was introduced on the P4, it had considerably higher bandwidth than DDR. RDRAM ran at 800 MHz, 16-bit, effectively delivering 3.2 GB/s in the dual-channel setup of a P4.
Single-channel DDR (64-bit) was originally 266 MHz, which delivered only 2.1 GB/s (dual channel didn't arrive until years later).
The update to 333 MHz still only came up to 2.7 GB/s.
Eventually DDR became faster, but that was mainly because RDRAM was abandoned anyway, and no further development happened on chipsets and RAM modules. There was only one Intel chipset for RDRAM on the Pentium 4: the i850, the chipset the P4 launched with, which only got a small update from PC800 to PC1066 memory support in the i850E.
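
Those figures follow directly from bus width times transfer rate; here is a quick sanity check, assuming the standard module widths (16 bits per RDRAM channel, 64 bits per DDR DIMM):

# Peak bandwidth = bus width (bytes) * transfer rate (MT/s) * channels.
# PC800 RDRAM: 16-bit channels at 800 MT/s, two channels on the i850.
# DDR-266 / DDR-333: a single 64-bit channel.

def bandwidth_gbs(width_bits, transfer_mts, channels=1):
    return width_bits / 8 * transfer_mts * channels / 1000.0

print(f"Dual-channel PC800 RDRAM: {bandwidth_gbs(16, 800, 2):.1f} GB/s")
print(f"Single-channel DDR-266:   {bandwidth_gbs(64, 266):.1f} GB/s")
print(f"Single-channel DDR-333:   {bandwidth_gbs(64, 333):.1f} GB/s")

That works out to 3.2 GB/s for dual-channel PC800 against 2.1 and 2.7 GB/s for DDR-266 and DDR-333, matching the numbers above.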

As you can see here, RDRAM performance was better than DDR at the time:
https://hothardware.com/reviews/asus-p4t533-i … -with-32?page=3


http://scalibq.wordpress.com/just-keeping-it- … ro-programming/