WDStudios wrote on 2021-06-28, 10:20:
For one thing, the vast majority of the changes that were made from the Pentium Pro to Penryn were allowed by miniaturization, n […]
bZbZbZ wrote on 2021-06-26, 20:34:
Can you explain to us again how the research & development work that Intel did to develop the Pentium Pro into the Core 2 Penryn over a span of ~13 years demonstrates that taking old designs like K8 + R350 and remaking them on 7nm is cheap?
For one thing, the vast majority of the changes that were made from the Pentium Pro to Penryn were allowed by miniaturization, not required by it. Trying to include an on-die level 2 cache or integrated memory controller during the 600 nm era would have resulted in very large die sizes and therefore unacceptably low yields. You could easily make a P6 on the 45 nm process without those features. Instruction sets like MMX and SSE? Also not mandated by newer fabrication processes. What architectural changes had to be made in order to accommodate smaller nodes?
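To put a rough number on that yield point: under the classic Poisson yield model, Y = exp(-D*A), yield falls off exponentially with die area, so bolting a big on-die L2 onto a 600 nm era die gets ugly fast. A quick sketch (the defect density and die areas are made-up illustrative figures, not historical data):

```python
import math

def poisson_yield(defects_per_cm2: float, die_area_cm2: float) -> float:
    """Classic Poisson yield model: Y = exp(-D * A)."""
    return math.exp(-defects_per_cm2 * die_area_cm2)

D = 1.0  # defects/cm^2 -- an illustrative assumption, not a measured mid-90s figure

for area in (1.0, 2.0, 4.0):  # hypothetical die areas: base core, +L2, +L2 and IMC
    print(f"{area:.0f} cm^2 die -> {poisson_yield(D, area):.1%} yield")
# 1 cm^2 -> 36.8%, 2 cm^2 -> 13.5%, 4 cm^2 -> 1.8%
```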
Second, if there's significant difficulty in adapting planar CMOS designs for FinFET processes, then just replace "7 nm" with "whatever the smallest planar CMOS process is".
debs3759 wrote on 2021-06-26, 20:41:
I just took another look through the whole thread, and the only specific example I can find which you mentioned is the Epia P910
Actually the two examples I gave were the K7 -> K10 and Pentium Pro -> Penryn.
First of all, the discussion wasn't about whether this is technically possible so much as whether it is financially feasible - you are still conflating doable with easy or cheap, and you haven't cited any previous example of this being done successfully at the same magnitude as, say, shrinking from 130 nm to something like 32 nm (the P54C shrink for Intel's Xeon Phi/Knights Corner wasn't simply a straight-up shrink). Yes, die shrinks are done all the time, but they are at most two steps within the same design, and they require review by engineers who have to validate the chip after the shrink so it conforms to the design rules of the new process node - which involves feeding it through EDA tools to make sure the process change doesn't mess with signal timing, push features below minimum spacing, or create thermal/capacitance hotspots where there previously were none - and those people don't work for peanuts. If changes do need to be made, a new photomask set has to be created for the chip, which in itself isn't cheap.
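To give a flavor of what "validate against the design rules" means in practice, here's a toy version of a single minimum-spacing check (real DRC decks cover hundreds of rules per layer; the geometry and the spacing value here are made up purely for illustration):

```python
from itertools import combinations

# Toy layout: axis-aligned rectangles on one layer, (x_min, y_min, x_max, y_max) in nm.
shapes = [(0, 0, 100, 40), (130, 0, 230, 40), (0, 60, 230, 100)]
MIN_SPACING_NM = 28  # made-up rule value, purely for illustration

def gap(a, b) -> float:
    """Edge-to-edge distance between two rectangles (0 if they touch or overlap)."""
    dx = max(b[0] - a[2], a[0] - b[2], 0)
    dy = max(b[1] - a[3], a[1] - b[3], 0)
    return (dx * dx + dy * dy) ** 0.5

for a, b in combinations(shapes, 2):
    g = gap(a, b)
    if 0 < g < MIN_SPACING_NM:  # touching/overlapping would trip a different rule
        print(f"DRC violation: {a} vs {b}, gap {g:.1f} nm < {MIN_SPACING_NM} nm")
```

Now multiply that kind of check across every rule on every layer of the new node, and re-run it after every fix.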
Even if the changes are minimal, you are still talking about taking the resultant work product and sending it off to a foundry (assuming you can book time to fab it - good luck doing that in the middle of the 2021 microchip shortage). At this stage it's still going to be at least five digits in outlay, and whatever foundry you are dealing with will want a minimum commitment for a production run - even at 10 wafers of the usual 300mm diameter, assuming an 84% yield on a typical 196mm^2 die, that's at least 2000 dies that need to be tested, validated, packaged and shipped. This isn't like taking new old stock silicon from some broker in Shenzhen and creating a new board for it via PCBWay - which, BTW, works just fine for getting old gear up and running for a modern audience.
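If you want to check that die count yourself, the usual first-order gross-die estimate for a 300mm wafer, with the 84% yield and 196mm^2 die from above, comes out like this (the exact count depends on scribe lines and edge exclusion):

```python
import math

def gross_dies(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """First-order gross-die estimate: wafer area / die area, minus an edge-loss term."""
    d, s = wafer_diameter_mm, die_area_mm2
    return int(math.pi * d * d / (4 * s) - math.pi * d / math.sqrt(2 * s))

wafers, die_yield, die_mm2 = 10, 0.84, 196
per_wafer = gross_dies(300, die_mm2)        # ~313 candidate dies per wafer
good = int(per_wafer * die_yield) * wafers  # ~2,620 good dies across 10 wafers
print(per_wafer, good)
```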
Then you want to wire up the CPU core to an old GPU core and an old chipset, which may or may not be built on a similar process and might need porting work of their own. Then you'll need to hire someone to lay out the floorplan, create the interconnects, validate them, finalize it all on a new photomask set, set up the packaging, and then book time with a foundry to fab it. Oh, and I am assuming you have a suitcase full of money that you are prepared to spend on any of this. Unless by some chance the resultant chip can beat an nVidia Tesla P100 at Monte Carlo trade-algo back-testing, in which case you make back your money and become a silicon god (hint: it can't and won't. If Intel clustered 60 P54C cores in the Xeon Phi and everyone in fintech pretty much ignored it, I doubt a rehash of a 20-year-old hardware design will make anyone sit up and take notice - is clustering a whole bunch of K8/R350 silicon suddenly going to look that much better to a data scientist or a quant?).

Otherwise the only people interested in it will be those pedantic enough to worship their beige towers, yet who choose to ignore all the other heavily depreciated hardware in old settop boxes, thin clients and netbooks that can be had for a song and would fit 95% of the usage scenarios (assuming you don't want pure DOS audio - getting ISA Sound Blaster compatibility on those things means you actually have to pay attention to the hardware components (certainly not the SB600 or any ATI southbridge), but ain't nobody got time for that).
Or, you know, you can call up AMD and be like:
"Hey Lisa, I need you to take a 20 year old CPU design, port it to 32nm, take a 20 year old GPU design, port that to a similar process node, add a compatible 20 year old southbridge, lay it down on a floorplan, assign a team of engineers to validate the design, create a photomask, setup the packaging, and call GlobalFoundary....
How many do I want? Oh, anywhere from 500 to 1500 chips. And can you make it obscenely cheap, like 2-3 dollars a chip? It needs to go into a bunch of SoC computers that we are selling on the cheap - you can do it because it's 20-year-old tech and you can just Ctrl+C/Ctrl+V off your old APU designs, right?
...Hello? Hello?"
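And just to spell out why that phone call ends the way it does: amortize only the one-time costs over the order. The NRE figures below are loudly hypothetical placeholders (real mask-set and engineering quotes vary wildly by node and team), but the shape of the arithmetic doesn't change:

```python
# Hypothetical NRE (non-recurring engineering) figures -- placeholders, not real quotes.
mask_set = 1_000_000      # photomask set on a mature planar node (assumed)
engineering = 2_000_000   # porting, validation, floorplanning, packaging work (assumed)
nre = mask_set + engineering

for volume in (500, 1500):
    print(f"{volume:>5} chips -> ${nre / volume:,.0f} of NRE per chip, before a single wafer is fabbed")
# 500 -> $6,000/chip; 1500 -> $2,000/chip -- a long way from "2-3 dollars a chip"
```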