VOGONS


Was the P4 architecture a dead end?


First post, by xjas

Rank: l33t

Maybe I'm interpreting this wrong... I just read that the modern Core CPU is a (distant) relative of the Pentium M, which is based on the old P6 (Pentium III/II/Pro, etc.)

So what happened to the Pentium 4 (P7)? Did it just vanish into history, or evolve into anything else? Obviously they back-ported some of its features (e.g. SSE2) into the P6... but I still remember the truly massive amount of hype around it when it was launched, so it's strange to see it obsoleted by its own predecessor.

twitch.tv/oldskooljay - playing the obscure, forgotten & weird - most Tuesdays & Thursdays @ 6:30 PM PDT. Bonus streams elsewhen!

Reply 1 of 81, by Gamecollector

Rank: Oldbie

Long pipeline + high clocks = enormous TDP?
Intel found the wall at 3.4 GHz and chose another way.

Asus P4P800 SE/Pentium4 3.2E/2 Gb DDR400B,
Radeon HD3850 Agp (Sapphire), Catalyst 14.4 (XpProSp3).
Voodoo2 12 MB SLI, Win2k drivers 1.02.00 (XpProSp3).

Reply 2 of 81, by kanecvr

Rank: Oldbie

From what I understand, the Netburst architecture was inefficient due to its long pipeline, which allowed for high clock rates but carried a severe branch misprediction penalty - up to 33% worse than a Pentium III running at the same frequency. You can find more details on the Netburst architecture here https://en.wikipedia.org/wiki/NetBurst_ ... hitecture) - but long story short, Netburst was inefficient compared to its predecessor and competitor, and no amount of re-engineering could solve the problem.

As far as I can figure, the Nehalem architecture (successor to the Core architecture), on which the first i7 and i5 CPUs are based, borrows from both the P6 line (wider core / multiple parallel pipelines) and the P7 architecture - a longer pipeline, 20-24 stages for Nehalem vs 12-14 for the Penryn/Yonah processors. While the Nehalem pipeline still doesn't have as many stages as the Prescott P7 (31 stages), it nearly doubles that of the Penryn cores - so you could say Nehalem is a compromise between the P6 and P7 architectures.
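To put rough numbers on that trade-off, here is a toy model. Every constant in it (logic depth, latch overhead, mispredict rate, branch frequency) is invented for illustration, not a measured figure for any real CPU - the point is only the shape of the trade: deeper pipelines buy clock speed sub-linearly while paying a flush penalty that grows with depth.

```python
# Toy model of pipeline depth vs. clock speed and misprediction cost.
# All constants are invented for illustration, not measured CPU figures.

def pipeline_model(stages, total_logic_ns=10.0, latch_ns=0.1,
                   mispredict_rate=0.1, branch_freq=0.2):
    """Return (clock_ghz, ipc, relative_perf) for a given pipeline depth."""
    # Cycle time = logic per stage + fixed latch overhead, so clock
    # scales sub-linearly with depth.
    clock_ghz = stages / (total_logic_ns + stages * latch_ns)
    # Each mispredicted branch flushes roughly the whole pipeline.
    flush_cost = mispredict_rate * branch_freq * stages
    ipc = 1.0 / (1.0 + flush_cost)
    return clock_ghz, ipc, clock_ghz * ipc

for depth in (12, 31):  # P6/Pentium M-like vs. Prescott-like depth
    clock, ipc, perf = pipeline_model(depth)
    print(f"{depth:2d} stages: {clock:.2f} GHz, IPC {ipc:.2f}, perf {perf:.2f}")
```

With these made-up constants the deep pipeline still wins on raw throughput despite much worse IPC - which roughly matches history: NetBurst did clock higher; the dead end was the power needed to keep scaling it, not the arithmetic.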

Intel seems to have gone a different route with current Core CPUs. Ivy Bridge and Haswell CPUs have a wider and shorter pipeline - only 14-19 stages - which more closely resembles the Pentium M core's 12-14 stage pipeline. This approach seems to make CPUs more power-efficient while maintaining or even increasing performance, and in this iteration (Haswell) it allows for high clock frequencies thanks to new technologies such as 3D tri-gate transistors and the 22nm manufacturing process.

One architecture (P6) seems to favor parallelism, while the other (P7) favors higher clock frequencies. It seems that modern Intel CPUs, particularly the Haswell 4th-generation Core processors, have reverted to short/wide pipelines.

Compared to the original Nehalem i7, Haswell has shorter/wider pipelines, but keeps other architectural elements, such as hyper-threading, 64 KB L1 cache, 256 KB L2 cache, a shared L3 cache, Intel QPI, and integrated memory, PCI-E and DMI controllers, so I guess things haven't changed significantly since 2007 😀. To be fair, Haswell has a "wider" core than its predecessors: four ALUs, a second branch prediction unit, a third address generation unit, as well as deeper buffers and an improved memory controller. Floating point performance is also greatly increased in Haswell over previous generations, although I can't seem to figure out why. In synthetic FPU (Julia, VP8) benchmarks alone, the 47W i7 4710HQ powering my laptop scores higher than a desktop 77W i7 3770K I've been playing with. This is impressive considering that the 4710HQ turbos up to 3.4GHz while the 3770K runs at 3.4GHz and goes as high as 3.9GHz with enough thermal headroom. By clocks alone, FPU Julia tests should favor the 3770K, but they don't.
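One plausible explanation for that FPU jump (my speculation, not something stated in the thread): Haswell added two 256-bit FMA units, doubling peak floating-point operations per cycle over Ivy Bridge's separate AVX add and multiply ports. A back-of-the-envelope peak comparison, using the turbo clocks quoted above:

```python
# Rough peak double-precision GFLOPS. Per-cycle figures are the
# architectural peaks (Ivy Bridge: 8 DP FLOPs/cycle/core via AVX add+mul;
# Haswell: 16 via two 256-bit FMA units); clocks are the turbo speeds
# quoted in the post.

def peak_gflops(ghz, flops_per_cycle_per_core, cores=4):
    return ghz * flops_per_cycle_per_core * cores

ivy_3770k = peak_gflops(3.9, 8)        # ~125 GFLOPS
haswell_4710hq = peak_gflops(3.4, 16)  # ~218 GFLOPS

print(f"i7-3770K peak: {ivy_3770k:.0f} GFLOPS")
print(f"i7-4710HQ peak: {haswell_4710hq:.0f} GFLOPS")
```

If the benchmark's code path uses FMA, the lower-clocked Haswell part comes out well ahead on paper, consistent with the result described above.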

The Broadwell and Skylake microarchitectures seem to be (as far as I can understand) a die shrink of Haswell plus new instructions and DDR4 support. Skylake-U is also rumored to feature an L4 cache.

So I guess the Netburst architecture wasn't a total waste. It brought high clocks, hyper-threading and L3 cache - features that are still in use today.

Reply 3 of 81, by Anonymous Coward

Rank: l33t

Prior to the mid-90s, the typical computer buyer was usually educated and rich/upper middle class. When PCs cost $2000+ they were out of reach for the average household. After 1995 or so, prices started dropping significantly, making them mainstream consumer products. Even before the P4 was officially introduced, everyone knew that it was a shitty architecture. Intel purposely chose a design that allowed them to ramp up clock frequency (even while performance suffered)...because the average Joe only paid attention to "MHz" when buying a new PC. Needless to say, the Netbust scam worked pretty well...until they got to the point where your PC almost melted down from the insane TDP, then they just switched back to the good old P6 core, which was conveniently still being developed for use in portable devices.

You can probably dig up some pretty scathing articles on theregister.co.uk if you're really interested.

And yes...I am the resident P4 hater. 😎

"Will the highways on the internets become more few?" -Gee Dubya
V'Ger XT|Upgraded AT|Ultimate 386|Super VL/EISA 486|SMP VL/EISA Pentium

Reply 4 of 81, by gdjacobs

Rank: l33t++
kanecvr wrote:
From what I understand, the Netburst architecture was inefficient due to it's long staged pipelines witch, while it allowed for […]

The issue with Netburst's deep pipeline was not the rate of branch misprediction; it was the high penalty of flushing the pipeline when a branch was mispredicted. Intel put a lot of engineering know-how into improving the branch prediction unit to reduce this problem, but the issue persisted. As a consequence, however, Core 2 and beyond had very robust branch prediction units.

This is also why SMT made some sense with Netburst. If one thread stalled on a mispredicted branch, there would potentially be instructions from the second thread waiting in the wings to keep the pipeline busy.
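That mechanism can be shown with a toy issue-slot simulation. The flush length and schedule here are invented numbers, purely to illustrate the effect: a second hardware thread fills cycles the first thread loses to a flush.

```python
# Toy simulation: each thread suffers a pipeline flush periodically;
# a single issue slot retires one instruction per cycle from any
# non-stalled thread. Flush timing/length are invented numbers.

def retired_instructions(threads, cycles=1000, flush_every=50, flush_len=20):
    stalled = [0] * threads   # remaining flush cycles per thread
    retired = 0
    for cycle in range(1, cycles + 1):
        for t in range(threads):
            # Stagger each thread's mispredicts so their flushes interleave.
            if (cycle - t * flush_every // 2) % flush_every == 0:
                stalled[t] = flush_len
        if any(s == 0 for s in stalled):
            retired += 1  # some ready thread keeps the pipeline busy
        stalled = [max(0, s - 1) for s in stalled]
    return retired

print("1 thread:", retired_instructions(1))
print("2 threads:", retired_instructions(2))
```

With one thread, every flush leaves the issue slot idle; with two staggered threads, the other thread soaks up those dead cycles - exactly the effect described above.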

All hail the Great Capacitor Brand Finder

Reply 5 of 81, by dr_st

Rank: l33t
Anonymous Coward wrote:

Prior to the mid-90s, the typical computer buyer was usually educated and rich/upper middle class. When PCs cost $2000+ they were out of range for the average household. After 1995 or so, prices started dropping significantly making them mainstream consumer products. Even before the P4 was officially introduced, everyone knew that it was a shitty architecture. Intel purposely choose a design that allowed them to ramp up clock frequency (even while performance suffered)...because the average Joe only paid attention to "MHz" when buying a new PC. Needless to say, the Netbust scam worked pretty well...until they got to the point where your PC almost melted down from the insane TDP, then they just switched back to the good old P6 core, which was conveniently still being developed for use in portable devices.

I must say that the version of the story I know differs somewhat from yours. For starters, I never saw any facts backing up the 'everyone knew' or 'scam' claims. You are, of course, exaggerating about PCs that 'almost melted'. Some Core 2 Extreme CPUs have comparable and even higher TDPs, and I don't recall any of them melting either. 😀

It is true, of course, that in terms of performance-per-clock and performance-per-watt, the P4 (Netburst) architecture is atrocious. It was a dead-end architecture, where short-term gains (higher clocks) soon hit the wall of maximum reasonable clock and power. Whether it was a case of marketing-driven engineering or simply bad engineering, I am not sure; having some idea how big organizations such as Intel work, it was probably a bit of both.

However, I have certainly never heard of any evidence that it was a deliberate plot of deceit. This was simply the best they had at the time. The PM was not ready, and the P3 was not enough.

Say all the bad things you want about P4s, but they had their place, and they had their moment in time when they were the top-of-the-line. And, if you disregard things like clock frequency and power usage, in terms of raw performance of the last of the single-core CPUs, a high-end P4 still wins against a high-end PM, and often even a high-end K8 (Athlon 64).

https://cloakedthargoid.wordpress.com/ - Random content on hardware, software, games and toys

Reply 6 of 81, by alexanrs

Rank: l33t

I believe Intel designed the Pentium 4 to achieve huge clock speeds to balance its poor IPC. The problem is, they hit a wall sooner than they expected, and the P4's intended successor was even worse (higher TDP than Prescott). Once they hit those walls, abandoning Netburst was inevitable.
About the P3... I wonder how a Tualatin would have performed if they had given it Netburst's improved branch prediction and quad-pumped FSB, and then launched THAT as the Pentium 4 on Socket 423.

Reply 7 of 81, by oerk

Rank: Oldbie
alexanrs wrote:

About P3... I wonder how would a Tualatin have performed if they gave it the Netburst's improved branch prediction and quad-pumped FSB, and then lauched THAT as the Pentium 4 on socket 423.

Well... the Pentium M _is_ pretty much what you're describing here. No idea what would've happened if it was launched as the P4, but you're asking how it would've performed - see Pentium 4 😀

Reply 8 of 81, by alexanrs

Rank: l33t

The Pentium Ms were mobile chips, and therefore restrained to lower multipliers and a 27W TDP (lower than desktop Tualatins). A desktop version of the Pentium M would have been allowed to clock higher / perform better. The fastest Pentium M was also restricted to a 533 MT/s FSB, whereas there are plenty of Pentium 4s (Northwood and later) that use an 800 MT/s FSB, and two Extreme Edition models go up to 1066 MT/s. Also, I'd like to know how the rehashed Tually would've fared against the Willamette it could've replaced. I believe the biggest gains would have been at the lower end of the market, as a P6-based Celeron wouldn't have taken as much of a hit from the reduced cache size.
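For reference, those transfer rates translate directly to peak bandwidth: the front-side bus is 64 bits (8 bytes) wide, so peak bytes/s is simply transfers/s times 8.

```python
# Peak FSB bandwidth for the transfer rates mentioned above.
# The P4/Pentium M front-side bus is 64 bits (8 bytes) wide.

def fsb_peak_gbps(megatransfers):
    return megatransfers * 8 / 1000  # GB/s (decimal)

for mts in (533, 800, 1066):
    print(f"{mts} MT/s -> {fsb_peak_gbps(mts):.1f} GB/s")
```

So the jump from the fastest Pentium M bus (533 MT/s, ~4.3 GB/s) to the Extreme Edition's 1066 MT/s (~8.5 GB/s) is a 2x difference in peak bandwidth.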

Reply 9 of 81, by Anonymous Coward

Rank: l33t

dr_st wrote:

It is true, of course, that in terms of performance-per-clock, and performance-per-watt, the P4 (Netburst) architecture is atrocious. It was a dead-end architecture, where short-term gains (higher clocks) soon hit the wall of maximum reasonable clock and power. Whether it was a case of marketing-driven-engineering, or simply bad engineering - I am not sure, and having some idea how big organizations such as Intel work, it's probably a bit of both.

Dead-end architecture = planned obsolescence = scam (you don't actually think they didn't know, right?)
Marketing-driven engineering = scam (my sister is a marketing major, and she is DEFINITELY full of shit!)
Big organisation = scam (especially when you're a government-lobbying monopoly like Intel)

I would not give Intel the benefit of the doubt, especially considering they have a history of scamming (see 486 Overdrive Socket, i487).
AMD is a company run by scammy lawyers. I have no love for them either.
Cyrix was a pretty interesting company, but they did scammy stuff too (remember PR ratings and the rigged benchmarks? I think PR must have actually stood for "public relations")

I admit that pretty much every company engages in scams at some point, especially publicly traded ones, but Intel was particularly brazen with the Netburst/Rambus scam and they really pissed off a lot of people at the time.

Now for my biased PIII rant.

The PIII had plenty of life left in it, as demonstrated by the fact that Tualatin existed. Intel only produced it up to 1.4GHz, but it has been shown to be stable up to 1.6GHz. I'm sure it could have been scaled further had Intel continued to die-shrink it (which they obviously would not do, because it would have made the P4 look like a giant turd). Intel purposely overpriced these chips (mostly sold as server-grade parts) and crippled their desktop platforms to make them unappealing to the masses (i815 was mostly inferior to the BX, i820 used horribly expensive RAMBUST technology [don't even get me started on those assholes], and the SDRAM-converted i820 was a dog). BX could have easily been updated with official 133MHz FSB support (DDR would have been nice too). Most consumers didn't even know Tualatins existed, but I did, and I bought one (and was very happy with it).

Basically from day one the Athlon kicked some serious P4 ass (except in the Intel-rigged benchmarks). I am an Intel man (mostly for the platform), but even I knew AMD had the upper hand. Everyone I knew went Athlon, and I was the sole remaining Intel guy. It was pretty embarrassing how much market share Intel lost to AMD. AMD wisely invested in DEC technology and caught Intel with their pants down. Thankfully Rambust, Itanic and Netbust all went the way of the dodo, and now we are blessed with Core (P6's revenge).

Unfortunately Intel's marketing department still continues to confuse us to this day with convoluted model names, which make it hard to avoid power pigs. I guess with the "extreme" line of chips that should be somewhat obvious though. For use in a retro gaming system (preferably underclocked and cooled with liquid nitrogen and a leaf blower if possible), I'm sure P4 isn't a bad choice since people will basically pay you to take them off their hands. As for me, I will continue to avoid them so I don't have to relive the pain and suffering endured from 2000-2005 or so. I love a lot of shitty Intel chips...like the P60 and the 80286...but there is no place in my heart for the P4.

"Will the highways on the internets become more few?" -Gee Dubya
V'Ger XT|Upgraded AT|Ultimate 386|Super VL/EISA 486|SMP VL/EISA Pentium

Reply 10 of 81, by Putas

Rank: Oldbie
Anonymous Coward wrote:

Dead-end architecture = planned obsolescence = scam (you don't actually think they didn't know, right?)

Unlikely - AMD was gaining momentum even before the Pentium 4's release. I really think it was a genuine mistake, along with the push for Rambus, which could never sell. If you wanted planned obsolescence, why invest in the development of a costly new architecture?
And regarding the i815, I believe back then all chipsets with integrated graphics suffered some penalty, even when the graphics were not used.

Reply 11 of 81, by kanecvr

Rank: Oldbie
dr_st wrote:

if you disregard things like clock frequency and power usage, in terms of raw performance of the last of the single-core CPUs, a high-end P4 still wins against a high-end PM, and often even a high-end K8 (Athlon 64).

The only benchmarks Netburst had an edge in were Netburst/SSE3-optimized ones, and those fell through as well when the Athlon 64 showed up. Simple example - take two CPUs: an Athlon 64 3800+ (single core, Venice, 2.4GHz, 512 KB L2 cache) vs a Pentium 4 620 (single core, Cedar Mill, hyper-threading, 2.8GHz, 2MB L2 cache) - the Athlon is marginally faster in all but a few benchmarks. It leads most FPU and game-related benchmarks except for FPU Julia, where the P4 edges slightly ahead due to its higher clock speed.

Still, I personally consider the Cedar Mill Netburst iteration pretty decent. They're pretty fast, and if you take a clock-vs-rating approach, they are almost on par with competing Athlons.

Last edited by kanecvr on 2015-11-06, 22:13. Edited 1 time in total.

Reply 12 of 81, by dr_st

Rank: l33t
Anonymous Coward wrote:

Dead-end architecture = planned obsolescence = scam (you don't actually think they didn't know, right?)
Market driven-engineering = scam (my sister is a marketing major, and she is DEFINITELY full of shit!)
Big organisation = scam (especially when you're a government lobbying monopoly like Intel)

Oh, so why didn't you just start your post saying that you are a conspiracy theory nut, and save us the hassle of reading it? 😀

https://cloakedthargoid.wordpress.com/ - Random content on hardware, software, games and toys

Reply 13 of 81, by Standard Def Steve

Rank: Oldbie
dr_st wrote:

Say all the bad things you want about P4s, but they had their place, and they had their moment in time when they were the top-of-the-line. And, if you disregard things like clock frequency and power usage, in terms of raw performance of the last of the single-core CPUs, a high-end P4 still wins against a high-end PM, and often even a high-end K8 (Athlon 64).

Perhaps in a handful of multimedia-type applications. But as an owner of all three CPUs - a P4 520 @ 3.73GHz (equivalent to the fastest single-core P4 EE), an Athlon 64 3700+ @ 2.8GHz (equivalent to the FX-57), and a Pentium M @ 2.7GHz - I can say that the K8 and PM are easily faster than the P4 in gaming applications.

94 MHz NEC VR4300 | SGI Reality CoPro | 8MB RDRAM | Each game gets its own SSD - nooice!

Reply 14 of 81, by dr_st

Rank: l33t

Where did you find a Pentium M @ 2.7GHz? Do you mean perhaps the Pentium M @ 2.27GHz?

I am not going to argue with your experience because, obviously, YMMV, but my experience (comparing a 3.0GHz P4-HT Northwood with a 1.8GHz P-M Dothan, both of which are mid-to-high range of their kind) has been somewhat different. In trivial single-threaded benchmarks they show very similar results. In practical multi-purpose use (multimedia, web browsing, office work), the P4 gets a very clear edge, possibly due to hyper-threading.

I have not tested them specifically in games, and such testing would be difficult because of the dependency on other parts of the ecosystem, mostly the GPU, which tends to be very different between desktops and laptops.

When it comes to comparing to the K8, then, yes I agree, that in most cases, the K8 has the advantage. I should probably have said "sometimes" instead of "often" in my first post.

https://cloakedthargoid.wordpress.com/ - Random content on hardware, software, games and toys

Reply 15 of 81, by Gamecollector

Rank: Oldbie
dr_st wrote:

which tends to be very different between desktops and laptops.

The standard way to test P4 versus P-M is the ASUS CT-479 adapter. You will lose SpeedStep but will get dual-channel PC2700 RAM and all compatible PCI/AGP add-on cards.

Asus P4P800 SE/Pentium4 3.2E/2 Gb DDR400B,
Radeon HD3850 Agp (Sapphire), Catalyst 14.4 (XpProSp3).
Voodoo2 12 MB SLI, Win2k drivers 1.02.00 (XpProSp3).

Reply 16 of 81, by kanecvr

Rank: Oldbie

Here are some Pentium 4 VS Pentium M benchmarks using said CT-479 adapter - http://techreport.com/review/8585/asus-ct-479 … ocket-adapter/5

It seems a 2.2GHz Pentium M performs similarly to a 3.4GHz P4.

Really wish I had one of those adapters and a suitable motherboard for it.
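Taking that TechReport result at face value, equal performance at unequal clocks implies the per-clock advantage directly:

```python
# If a 2.2GHz Pentium M matches a 3.4GHz Pentium 4 on the same benchmark,
# equal work at unequal clocks implies this per-clock (IPC) ratio.

pm_ghz, p4_ghz = 2.2, 3.4
ipc_advantage = p4_ghz / pm_ghz
print(f"Pentium M does ~{ipc_advantage:.2f}x the work per clock of the P4")
```

A rough figure, of course - the real ratio varies by workload - but it shows why the "MHz myth" framing stuck to Netburst.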

Reply 17 of 81, by alexanrs

Rank: l33t
Anonymous Coward wrote:

The PIII had plenty of life left in it, and this was demonstrated by the fact that Tualatin existed. Intel only produced it up to 1.4GHz, but it has been shown to be stable up to 1.6GHz. I'm sure it could have been scaled further had Intel continued to die shrink it (which they obviously would not do, because it would have made P4 look like a giant turd).

I wouldn't go as far as saying the P3 still had plenty of life left. It was limited by its FSB - it could not take full advantage of RDRAM or DDR RAM. It was just not viable to keep pushing the Pentium 3 against the Athlon threat, or even the Duron threat. As clocks went higher, the gap was bound to increase, as the K7 had more room to breathe with its double-pumped FSB. As inefficient as it was, Netburst at least succeeded in delivering enough performance to compete, even if VRMs combusted while doing so.

Anonymous Coward wrote:

Intel purposely overpriced these chips (mostly sold as server grade parts), and crippled their desktop platforms to make them unappealing to the masses. (i815 was mostly inferior to the BX, i820 used horribly expensive RAMBUST technology [don't even get me started on those assholes], and the SDRAM converted i820 was a dog). BX could have easily been updated with official 133MHz FSB support (DDR would have been nice too). Most consumers didn't even know Tualatins existed, but I did and I bought one (and was very happy with it). Basically from day one Athlon kicked some serious P4 ass (except in the Intel rigged benchmarks). I am an Intel man (mostly for the platform), but even I knew AMD had the upper hand. Everyone I knew went Athlon, and I was the sole remaining Intel guy. It was pretty embarrassing how much market share Intel lost to AMD. AMD wisely invested in DEC technology and caught Intel with their pants down. Thankfully Rambust, Itanic and Netbust all went the way of the dodo, and we now we are blessed with core (P6's revenge).

I wouldn't fault planned obsolescence for the i820, though, as the P4 itself was stuck with RDRAM for a while. Intel and Rambus were together in a weird quest to push the turd that RDRAM was. It seems fitting, though, as RDRAM was in a lot of ways similar to the Pentium 4 - hot and inefficient, with huge headline numbers but atrocious latency.

Reply 18 of 81, by kanecvr

Rank: Oldbie
alexanrs wrote:
I wouldn't go as far as saying the P3 still had plenty of life left. It was limited by its FSB - and it could not take full adva […]

They quad-pumped the FSB on the Pentium M just as they did on the Pentium 4, so that wasn't an issue. Besides, modern Intel Core CPUs (Sandy Bridge, Haswell, etc.) use a 100MHz base clock and Quick Path Interconnect, so the FSB isn't a performance-determining factor anymore.

Reply 19 of 81, by Standard Def Steve

Rank: Oldbie
dr_st wrote:
Where did you find a Pentium M @ 2.7GHz? Do you mean perhaps the Pentium M @ 2.27GHz? […]

It's actually a PM 755 (2GHz) overclocked to 2.7GHz on an MSI Speedster i915 socket 479 desktop board. This board is even better than using the CT-479 adapter on a regular 478 board because it supports PCI-E and dual-channel DDR2-533. At 2.7GHz, the PM runs neck and neck with the K8 at 2.8GHz. I haven't tested the PM at lower clock speeds, but based on its gaming chops at 2.7GHz, I think that at 2.27GHz it would have no problem matching the 3.73GHz P4 in most games. I've only tested older games from 2000-2006. The P4 would probably be able to even things out or even outperform the PM in newer multi-threaded games.

94 MHz NEC VR4300 | SGI Reality CoPro | 8MB RDRAM | Each game gets its own SSD - nooice!