VOGONS

First post, by robertmo

User metadata
Rank l33t++

https://www.bloomberg.com/news/articles/2020- … highest-end-pcs

For its future high-end laptops and mid-range desktops, Apple is testing 16-core and 32-core graphics parts.

For later in 2021 or potentially 2022, Apple is working on pricier graphics upgrades with 64 and 128 dedicated cores aimed at its highest-end machines, the people said. Those graphics chips would be several times faster than the current graphics modules Apple uses from Nvidia and AMD in its Intel-powered hardware.

Last edited by Stiletto on 2020-12-09, 01:13. Edited 3 times in total.
Reason: failed to provide citation

Reply 3 of 17, by Caluser2000

User metadata
Rank l33t

I wonder when b0rito and *486* will join in 😀.

Last edited by Stiletto on 2020-12-09, 01:14. Edited 1 time in total.

There's a glitch in the matrix.
A founding member of the 286 appreciation society.
Apparently 32-bit is dead and nobody likes P4s.
Of course, as always, I'm open to correction...😉

Reply 5 of 17, by Stiletto

User metadata
Rank l33t++

Last edited by Stiletto on 2020-12-08, 20:13. Edited 3 times in total.
Reason: failed to provide citation

fix your darn stuff, robertmo.

"I see a little silhouette-o of a man, Scaramouche, Scaramouche, will you
do the Fandango!" - Queen

Stiletto

Reply 6 of 17, by appiah4

User metadata
Rank l33t++

I look forward to this actually happening so Apple can finally go back to being an irrelevant nuisance in the computing world.

Retronautics: A digital gallery of my retro computers, hardware and projects.

Reply 7 of 17, by Cyberdyne

User metadata
Rank Oldbie

It is a fact that ARM has a better computation-per-watt ratio than x86/AMD64 CPUs. But 3D is a totally different game. And there are very power-efficient AMD/Nvidia 3D cards, though not the latest generation, because AMD/Nvidia want to suck your wallet dry for all that AAA 4K gaming glory.

I am aroused by any x86 motherboard that has a fully functional ISA slot. I think I have a problem. Not really into that original (Turbo) XT, 286, 386 and CGA/EGA stuff. So just a DOS nut.
PS. If I upload a RAR, it is 16-bit DOS RAR version 2.50.

Reply 8 of 17, by Caluser2000

User metadata
Rank l33t
Cyberdyne wrote on 2020-12-09, 11:43:

It is a fact that ARM has a better computation-per-watt ratio than x86/AMD64 CPUs. But 3D is a totally different game. And there are very power-efficient AMD/Nvidia 3D cards, though not the latest generation, because AMD/Nvidia want to suck your wallet dry for all that AAA 4K gaming glory.

I'll remind my late-'80s/early-'90s Acorn RISC systems of that. The 486s will be afraid I'll use them less in the future...

There's a glitch in the matrix.
A founding member of the 286 appreciation society.
Apparently 32-bit is dead and nobody likes P4s.
Of course, as always, I'm open to correction...😉

Reply 9 of 17, by Cyberdyne

User metadata
Rank Oldbie

The thing is, in 486 times much programming was platform-specific. Now, in reality, almost everything is portable. Hey, they even have Windows 10 on ARM now 😁

I am aroused by any x86 motherboard that has a fully functional ISA slot. I think I have a problem. Not really into that original (Turbo) XT, 286, 386 and CGA/EGA stuff. So just a DOS nut.
PS. If I upload a RAR, it is 16-bit DOS RAR version 2.50.

Reply 10 of 17, by appiah4

User metadata
Rank l33t++

There have been a lot of good articles and videos about why Apple's computing-power comparisons to contemporary x86-64 Ryzen CPUs are UTTER rubbish, if anyone actually wants to learn whether Apple's silicon is up to snuff. The short answer is: NO, it is not. All those so-called single-threaded benchmarks they put out are basically their multi-core against an x86 single core. Ryzen multi-core performance will demolish Apple's shitty CPUs.

As for GPUs.. I'll just put this here:

tenor.gif?itemid=9628866

Cyberdyne wrote on 2020-12-09, 13:57:

The thing is, in 486 times much programming was platform-specific. Now, in reality, almost everything is portable. Hey, they even have Windows 10 on ARM now 😁

Yes, it exists, and it's Sooooo good that people would rather virtualize an x86 environment and run Windows 10 that way instead. (And delude themselves into thinking it will actually run great)

Retronautics: A digital gallery of my retro computers, hardware and projects.

Reply 11 of 17, by Cyberdyne

User metadata
Rank Oldbie
appiah4 wrote on 2020-12-09, 14:12:

Yes, it exists, and it's Sooooo good that people would rather virtualize an x86 environment and run Windows 10 that way instead. (And delude themselves into thinking it will actually run great)

I think that's only because Microsoft is not prioritizing it. Well, I think the future is a Linux kernel with a monolithic, Windows-like user interface. It would already have happened if all the hundreds and thousands of "distros" worked together.

There is only one best kernel, Linux; after that it's muddy water. And why do so many people use old Windows versions rather than some Linux distro flavor? Because we need consistency.

Well, I am going off topic, but I hope you understand what I mean. Programming and applications are ultra-portable these days.

I am aroused by any x86 motherboard that has a fully functional ISA slot. I think I have a problem. Not really into that original (Turbo) XT, 286, 386 and CGA/EGA stuff. So just a DOS nut.
PS. If I upload a RAR, it is 16-bit DOS RAR version 2.50.

Reply 15 of 17, by Cyberdyne

User metadata
Rank Oldbie

And you know what the future is, if you do not play AAAA games in 16K or just run benchmarks (pun intended): the AMD Ryzen APU, with no real need for a discrete GPU.

I am aroused by any x86 motherboard that has a fully functional ISA slot. I think I have a problem. Not really into that original (Turbo) XT, 286, 386 and CGA/EGA stuff. So just a DOS nut.
PS. If I upload a RAR, it is 16-bit DOS RAR version 2.50.

Reply 16 of 17, by NoMis

User metadata
Rank Newbie
appiah4 wrote on 2020-12-09, 14:12:

There have been a lot of good articles and videos about why Apple's computing-power comparisons to contemporary x86-64 Ryzen CPUs are UTTER rubbish, if anyone actually wants to learn whether Apple's silicon is up to snuff. The short answer is: NO, it is not. All those so-called single-threaded benchmarks they put out are basically their multi-core against an x86 single core. Ryzen multi-core performance will demolish Apple's shitty CPUs.

As for GPUs.. I'll just put this here:

tenor.gif?itemid=9628866

Cyberdyne wrote on 2020-12-09, 13:57:

The thing is, in 486 times much programming was platform-specific. Now, in reality, almost everything is portable. Hey, they even have Windows 10 on ARM now 😁

Yes, it exists, and it's Sooooo good that people would rather virtualize an x86 environment and run Windows 10 that way instead. (And delude themselves into thinking it will actually run great)

Well, they specifically optimized the CPU cores to be very efficient in single-threaded scenarios, and there is basically nothing wrong with that. A lot of consumer-grade software is still inherently single-threaded. I'm actually impressed by how much performance Apple delivered here. Of course, when it comes to heavily multithreaded workloads, a different picture emerges. And that is even before Zen 3 has actually hit the laptop market.
Tiger Lake also looks very good when it comes to single-threaded workloads, let alone multithreading.

As far as energy efficiency goes, we have to see how much of it comes from their CPU architecture and how much from the 5nm TSMC process. They certainly have the node advantage here. Zen 4 on 5nm will look very interesting. I also look forward to more from x86 regarding the pairing of efficient cores with performance cores; Intel already dipped into that area with the likes of Lakefield. I also hope that Intel will finally get their act together regarding their manufacturing and actually deliver Intel 7nm.

When it comes to desktop-class hardware, the M1 has to prove that it can scale up.

I can't say much regarding GPU performance; I haven't looked at that particular topic yet.

All in all, Apple made a great CPU. But it certainly is not, as some people proclaim, the end of x86. Far from it. As far as efficiency goes, they don't have any magic ingredient and are bound by physics, so I don't expect them to magically deliver much more performance per watt than anyone else. Certainly not just because of the architecture.

Reply 17 of 17, by Error 0x7CF

User metadata
Rank Member

Even if Apple develops and releases the absolute fastest CPU on the market in every benchmark and can virtualize x86 as fast as concurrent chips run it, x86 (or, at least, Windows) will still be the most popular platform. Apple can't (more likely won't?) price their products to dominate the market; they are happy being a boutique brand and having the fat margins that come with that.

It seems likely that ARM will scale out of the low-performance, low-power niche it occupies now, but it probably won't be Apple dominating that market. Windows on ARM will likely be on top, running on Qualcomm etc. silicon, because that's what will be in $200 Walmart laptops, and that is what the average consumer will buy. The average person just won't buy a $1000 laptop, no matter how good it is. In contrast, a $200 ARM laptop that runs legacy Windows programs is a killer proposition, especially because even with what is right now a middling-capacity laptop battery it would have great battery life. Apple needs to be able to dual-boot Windows properly (Windows on ARM, with (fast!) emulated x86 for compatibility) on their laptops, as their x86 models do now, if they want to keep or grow their existing market share. It's entirely possible that Microsoft snubs them on this, so they can't, and Macs have to make do with some shaky emulation layer that won't convince people to switch to Mac hardware.

I'm sure you all know that x86 was not always speed king; high-end RISCs sat at the (expensive) top of the market for a very long time, up until around when the Athlons came out and could compete. A possible scenario is a return to that, though it is extremely optimistic about Apple Silicon's scalability, as well as about ARM/RISC in general. The extreme high end might be ARM or other RISCs in the future, but the jury is out on that. It's possible that in 2025/2030 the high end of the market looks like 1995, with fast RISC workstations from a boutique vendor (Apple this time) at the extreme high end, and x86 and low-end Apple Silicon (the PowerPC stand-in) competing elsewhere, though in a few years it's going to look a lot better for ARM than it did for PowerPC, especially considering Microsoft is looking to hop over for real this time. Windows NT ran on RISCs before, but those started at the high end, so it was doomed never to reach mass adoption.

It's very likely that in the near future, ARM running Windows will have convincing control of the low to middle-low end of the overall computer market, and that x86 will be slugging it out in the middle and high end (of laptops). On the desktop, low-end soldered ARM will likely creep into the bottom of the market someday soon, the part that is currently Atoms and the low-end Celerons and Pentiums. I don't see ARM winning in the higher desktop space unless somebody sockets it and it scales really well. Either way, an ARM win on the desktop also requires ARM domination in the laptop and server spaces first, to build up software compatibility. Server shouldn't be too hard; that's already underway, because servers are so power-usage-sensitive and heavily multicore, so having many more, but weaker, cores is no problem.

It seems obvious that RISCs with a fixed instruction length can scale to a wider issue width per core than x86, because they don't, say, have to deal with converting wacky, up-to-15-byte, memory-unaligned CISC instructions into micro-ops for execution. Whether that outweighs the fact that RISCs naturally do less per instruction is the pinch. RISCs can also spend more silicon on performance-enhancing features, because they don't have to deal with the aforementioned x86-to-uOp decoding.
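The decode-width point above can be sketched with a toy model (Python; the "ISA" here is entirely hypothetical, with the instruction length encoded in the first byte — real x86 length decoding is far messier): with a fixed width, every instruction boundary is known up front and a wide decoder can start on all of them at once, while with variable lengths boundary n+1 is only known after instruction n has been decoded.

```python
# Toy illustration only: why fixed-width instruction encodings parallelize
# decode. Neither function models a real ISA.

def fixed_boundaries(code: bytes, width: int = 4) -> list[int]:
    """Fixed 4-byte instructions: boundary of instruction n is simply
    n * width, so all boundaries are known independently and up front."""
    return list(range(0, len(code), width))

def variable_boundaries(code: bytes) -> list[int]:
    """Variable-length instructions (1..15 bytes; length hypothetically
    derived from the first byte): finding boundary n+1 requires decoding
    instruction n first, so the scan is inherently sequential unless the
    decoder speculates on where instructions might start."""
    offsets, pc = [], 0
    while pc < len(code):
        offsets.append(pc)
        length = (code[pc] % 15) + 1  # hypothetical length field
        pc += length
    return offsets
```

A wide x86 decoder works around this by speculatively decoding at many candidate byte offsets and discarding the wrong ones, which is exactly the extra silicon and power the post is talking about.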

Old precedes antique.