Reply 180 of 192, by roytam1
my Athlon x2 3250e with JetWay JNC81-LF is still acting as internet gateway server for my home now.
Testbench 13:
- Gigabyte GA-MA770-UD3 v2.0
- Athlon X2 7850 BE 2.8Ghz AD785ZWCJ2BGH (Kuma, stepping B3, released in 2009)
- Asus GeForce GTX 480 (factory clocks, 701 GPU, 924 memory, 1401 shader, released in 2010). NVidia driver 197.41
- 4x 2GB DDR2 800 unganged running at 4 4 4 12
Athlon X2 7750 BE was OCed to simulate the 7850 BE: 14x multiplier, stock CPU voltage. The results should be identical to a real 7850 BE. Later, a real 7850 BE will be OCed to simulate a 2.9-3.0Ghz version. As with the Phenom X4 9950 BE 2.6Ghz, we couldn't run 4x 2GB sticks at 1066; only 2 memory sticks can run at that speed, but we need 8GB RAM for the Vista era.
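For reference, the clock and memory-latency arithmetic works out as in the sketch below (assuming the stock 200MHz AM2+ reference clock and DDR2-800's 400MHz command clock; purely illustrative):

# CPU clock from multiplier (AM2+ reference clock is 200 MHz by default)
ref_clock_mhz = 200
multiplier = 14                    # 14x simulates the 7850 BE; 15x would give 3.0Ghz
print(ref_clock_mhz * multiplier)  # 2800 MHz = 2.8Ghz

# CAS latency of DDR2-800 at 4-4-4-12 in nanoseconds
mem_clock_mhz = 400                # DDR2-800 runs its command clock at 400 MHz
cas_cycles = 4
print(cas_cycles / mem_clock_mhz * 1000)  # 10.0 ns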
3d mark 2006 breakdown, 1024x768:
3d mark 2006 breakdown, 1600x900:
3d mark 2006 breakdown, 1600x1200:
Athlon X2 7850 BE 2.8Ghz falls in-between Athlon 64 X2 6000+ (Windsor) and Athlon 64 X2 6400+ (Windsor) as expected.
Games tested:
Windows XP
- F.E.A.R. (2005) - in 1600x1200 with max settings we get 141 fps average in built-in benchmark, 3 fps less than Athlon 64 X2 6400+ (Windsor).
- World in Conflict (2007) - in 1600x1200 we get 31 average fps in the built-in benchmark with the best visual quality settings.
- Crysis (2007) - in 1600x1200 average fps in benchmark is 45 without full screen anti aliasing. Everything else was set to max. When 4x anti aliasing is enabled, we get 43 average fps.
- Far Cry 2 (2008) - with max settings, in 1600x1200 we get 43 fps in the built-in "Demo Ranch" benchmark and 24 fps in the "Action Scene" benchmark. Not enough for an enjoyable experience.
- Colin McRae: Dirt 2 (2009) - with ultra settings, in 1600x1200 we get 59 average fps in the built-in benchmark.
Windows 7
- Crysis (2007) - in dx10 mode in 1600x1200 we get 31 average fps with very high quality and 4x anti aliasing in built-in benchmark. In dx9 mode with high details and 4x anti aliasing we get 45 fps.
- Grand Theft Auto IV (2008) - in 1600x1200 we get 34 average fps in the built-in benchmark with very high quality, view distance 50, detail distance 60, vehicle density 51 (double values from defaults).
- Warhammer 40,000: Dawn of War II (2009) - with ultra settings, in 1600x1200 we get 38 average fps in the built-in benchmark.
- Batman: Arkham City (2011) - in 1600x1200 we get average 41 fps with very high quality and FXAA high anti-aliasing. PhysX was off as it could not be enabled with the old driver 197.41.
Conclusion about Athlon X2 7850 BE 2.8Ghz with GeForce GTX 480 and DDR2 800:
- doesn't match Athlon 64 X2 6400+ (Windsor) performance wise in Windows XP games but beats Athlon 64 X2 6000+ (Windsor). Phenom based X2 CPUs fail to beat the best Windsor CPU.
- it is a good choice for Windows XP era games (2002-2006)
- mediocre coverage of Windows Vista era (2007-2009) due to slow CPU, noticeable in games from 2008
- noticeably slower than Phenom X4 9950 BE 2.6Ghz in Windows Vista games such as GTA 4, Far Cry 2, Colin McRae: Dirt 2. We really need 4 AMD cores for later games.
- in hindsight, Phenom X4 9950 BE 2.6Ghz is a decent choice from tested CPUs to cover both Windows XP and Windows Vista era except for Crysis. It covers Vista era much better in general.
Next steps:
- test Athlon X2 7850 BE OCed to 2.9-3.0Ghz. It is expected to match Athlon 64 X2 6400+ (Windsor) performance wise. We would like to see how high the clocks would have to go to match Windsor.
- although there is also Phenom II X2 with clocks high enough to beat Windsor, it will be better served in an AM3 board and testing has shown 2 cores are inadequate for some Windows Vista games
Pentium III 900E, ECS P6BXT-A+, 384MB RAM, GeForce FX 5600 128MB, Voodoo 2 12MB, Yamaha SM718 ISA
Athlon 64 3400+, Gigabyte GA-K8NE, 2GB RAM, GeForce GTX 260 896MB, Sound Blaster Audigy 2 ZS
Phenom II X6 1100, Asus 990FX, 32GB RAM, GeForce GTX 980 Ti
Phenom I REALLY needed that 6MB of L3 (it sadly wasn't possible, due to SRAM cell size at 65nm).
Phenom II x4 940 should be a good bit faster than Phenom 9950, especially in games.
The expectation is that an Athlon X2 at 3Ghz (Phenom based) may be enough to match Windsor, but that is of limited value since it still only covers Windows XP. It's interesting mainly for comparison with Phenom II X4 at the same clock. Phenom II is really needed in AM2+, with its higher clocks and larger L3 cache. Around 3Ghz could be enough. Phenom II X4 940 BE should be a good choice (it is also a true AM2+ CPU); if not available, a 945 / 955 BE (AM3) could be used as a replacement. Phenom II X4 975 / 980 in AM2+ would kind of look like a 486 DX4 100 in an ISA-only motherboard.
Phenom X4 9950 isn't that bad in later games that utilize multiple threads, but does poorly in Crysis. The regression compared to Windsor is too big to accept.
We are kind of pushing AM2+ to the limits. When I was buying boards, most people seemed to have either a 2.2-2.3Ghz Phenom X4 or Athlon X2 in them. The top tested configurations here almost never existed.
Pentium III 900E, ECS P6BXT-A+, 384MB RAM, GeForce FX 5600 128MB, Voodoo 2 12MB, Yamaha SM718 ISA
Athlon 64 3400+, Gigabyte GA-K8NE, 2GB RAM, GeForce GTX 260 896MB, Sound Blaster Audigy 2 ZS
Phenom II X6 1100, Asus 990FX, 32GB RAM, GeForce GTX 980 Ti
AlexZ wrote on Yesterday, 07:46:We are kind of pushing AM2+ to the limits. When I was buying boards, most people seemed to have either a 2.2-2.3Ghz Phenom X4 or Athlon X2 in them. The top tested configurations here almost never existed.
Yeah, and that's why such a comparison does not necessarily represent how things were back then. If someone was building a system on a limited budget (which it always is), the results of an LGA775 vs AM2/AM2+ comparison could be totally different.
I did find a 6000+ Windsor though, dumpster diving through stuff to be discarded at work.
And that Phenom II I've tested was the one I used back then, though being a 720 BE it was not really "top" before unlocking/overclocking. This one definitely was very cost-effective back then... it is a shame AMD decided to make unlocking impossible on newer CPUs - with how bad FX was, it could have given them at least some reason to exist...
Archer57 wrote on Yesterday, 08:42:And that phenom2 i've tested was the one i've used back then, though being 720BE it was not really "top" before unlocking/overclocking. This one definitely was very cost-effective back then... it is a shame AMD decided to make unlocking impossible on newer CPUs - with how bad FX was it could have given them at least some reason to exist...
The FX chip added the turbo core features (yes, select Phenom II's had it too). With turbo core giving you an auto overclocking capability, why would you, as a designer, give an additional manual overclocking ability? What margin is left to exploit? Besides maybe on very select, carefully tested dies that exhibit excellent capabilities, which you can label an "overclock" version or "black edition". Considering FX was rough around the edges, I can see them saying turbo core is what people really want. And since they know best how to get performance out of the chip, why allow the great risk of actually damaging it? (Overclocking on current advanced nodes makes me really concerned! The models they use show the chips won't have much life in them then...) So better, they say, to make using whatever margin there is easy and not have to deal with silly returns because someone cooked their chip or it is not a good overclocker. Yes, that takes the fun away. And yes, even though there is some merit to just having auto overclock for the average person, it does kind of make the FX look bad, because it performed poorly at the time when it had it.
the3dfxdude wrote on Yesterday, 13:41:The FX chip added the turbo core features (yes select Phenom II's had it too). With turbo core giving you an auto overclocking capability, why would you, as a designer, give an additional manual overclocking ability? What margin is left is there to exploit? Besides maybe on very select carefully tested dies, that exhibit excellent capabilities that you can label a "overclock" version or "black edition", and considering FX being rough around the edges, I would see them say turbo core is what people really want. And since they know best on how to get performance out of the chip, why allow the great risk now of actually damaging the chip (overclocking in current advanced nodes makes me really concerned! The models they use show they won't have much life in them then...) So better they say, let's make using whatever margin there is easy and not have to deal with silly returns because they cooked their chip or it is not a good overclocker. Yes that takes the fun away. And yes, even though there is some merit to just having auto overclock for the average person, it does kind of make the FX look bad, because it poorly performed at the time when it had it.
Looks like there is a misunderstanding between a temporary turbo, where the CPU can exceed TDP only for a short period of time under certain workload conditions, and long-term overclocking, where a higher TDP is acceptable. Entirely different things. I wouldn't make any claims on behalf of other people. One of the major benefits of Phenom II was core unlocking, which is totally unrelated to turbo/overclocking.
My Phenom II X6 1100T has the first turbo function but I don't see it working frequently despite it being enabled in BIOS. What is more important to me is that the BIOS allows me to configure both the base and boost frequency. Since the CPU can boost up to 3.7Ghz, I could easily set the base to 3.5-3.6Ghz if I needed, as long as I have adequate cooling.
I also bought an FX-8370 (which will be replacing the Phenom II X6 1100T) that allows me to set the base and boost frequency in BIOS as well. A Noctua NH-D15 will be used to cool it and, if needed, I could set the base frequency to 4.2-4.3Ghz and deal with the higher TDP.
Pentium III 900E, ECS P6BXT-A+, 384MB RAM, GeForce FX 5600 128MB, Voodoo 2 12MB, Yamaha SM718 ISA
Athlon 64 3400+, Gigabyte GA-K8NE, 2GB RAM, GeForce GTX 260 896MB, Sound Blaster Audigy 2 ZS
Phenom II X6 1100, Asus 990FX, 32GB RAM, GeForce GTX 980 Ti
Before I parted ways with most of my post-Pentium III, pre-AVX2 stuff a couple of years ago, I had an X6 1090T in my collection. I believe that was one of the DDR3/AM3-only Phenoms. You know, I really liked that machine! I ran mine at 4.05 GHz. At that speed, it easily outperformed a stock E8600 even in single-threaded loads, but my gosh did it run red-hot. That machine put out more heat than my current Ryzen 5950X PBO'd to 5.05/4.65 GHz!
Unfortunately, Phenom really hasn't aged well, even compared to the FX lineup. I mean, the lack of SSE 4.1 is kind of understandable for an architecture that originally launched in 2007, but no SSSE3? C'mon now!
Anyway. Just a bit of an anecdote that doesn't really add much to your thread! 😀
"A little sign-in here, a touch of WiFi there..."
AlexZ wrote on Yesterday, 19:43:Looks like there is a misunderstanding between a temporary turbo, where the CPU can exceed TDP only for a short period of time, under certain workload conditions and long term overclocking where a higher TDP is acceptable. Entirely different things. I wouldn't make any claims on behalf of other people.
You can certainly run an elevated base frequency and deal with the higher TDP as much as you want on your chips. The designers qualify their chips for a reason, to get an expected lifespan out of their highly clocked parts. I simulated this stuff and I've seen the numbers constraining the max frequency, so this is a first-hand account. The max-frequency margins for that generation were really low for the clocks they tried to push, and then they added the turbo (which is why I mention it), leaving essentially no room at all if you stayed in spec, deployed the expected use case, and trusted the vendor and their silicon. Overclocking is overclocking. They do have product tiers they want to meet demand for; if you want to play the lottery on the quality of the silicon, sure, try it out. Given what I've seen, I don't trust that these companies didn't bend the rules going to tape-out and binning.
I do like the Phenom II, but still, it isn't like there aren't flaws.
the3dfxdude wrote on Yesterday, 13:41:The FX chip added the turbo core features (yes select Phenom II's had it too). With turbo core giving you an auto overclocking capability, why would you, as a designer, give an additional manual overclocking ability? What margin is left is there to exploit? Besides maybe on very select carefully tested dies, that exhibit excellent capabilities that you can label a "overclock" version or "black edition", and considering FX being rough around the edges, I would see them say turbo core is what people really want. And since they know best on how to get performance out of the chip, why allow the great risk now of actually damaging the chip (overclocking in current advanced nodes makes me really concerned! The models they use show they won't have much life in them then...) So better they say, let's make using whatever margin there is easy and not have to deal with silly returns because they cooked their chip or it is not a good overclocker. Yes that takes the fun away. And yes, even though there is some merit to just having auto overclock for the average person, it does kind of make the FX look bad, because it poorly performed at the time when it had it.
Honestly I do not like the whole "turbo" thing at all, whether on modern hardware or on this old stuff with the first implementations. I mean, we already have power saving where frequency is reduced when idle; why do we need to go both ways, why not just set the frequency to the max it can handle + a power limit?
It makes modern hardware weird, where it can be overclocked by downvolting (less power = more boost within power limit) etc.
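A rough sketch of why that works, assuming the usual dynamic-power approximation P ≈ C·V²·f and a fixed package power limit (the constant and voltages below are made-up, purely illustrative numbers):

# Toy model: at a fixed power limit, a lower voltage leaves headroom for higher clocks
C = 1.2e-8             # lumped switching-capacitance constant (illustrative only)
power_limit_w = 95.0   # hypothetical package power limit

def max_freq_ghz(voltage):
    # Solve P = C * V^2 * f for f at the power limit
    return power_limit_w / (C * voltage ** 2) / 1e9

print(round(max_freq_ghz(1.40), 2))  # ~4.04 GHz sustainable at the higher voltage
print(round(max_freq_ghz(1.30), 2))  # ~4.68 GHz undervolted, ~16% more within the same limit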
On phenoms? I'd much rather disable turbo, raise multiplier (so all cores) and leave power saving on. This way it'll be more predictable and as long as cooling can handle it performance will be better.
However, OC was not really what I was talking about initially - I was talking about unlocking cores. On Phenoms it was possible. On FX they made an extra effort to prevent it. I get it - they want to sell more expensive CPUs, but IMO with how bad the FX was, at least leaving this lottery available would have made them some extra sales...
Longevity? My Phenom has been working daily for 15 years +/-, unlocked and overclocked, at higher temperatures... and it is still fine. I'd say it has practically outlived its useful life. I get all the talk about modern hardware with smaller transistors and longevity concerns, but I'll only be worried about it if we see actual failures in large quantities. So far - nothing but some unrelated issues like AMD killing stuff by overvolting it out of the box...
Turbo wouldn't be bad if it had the capability to enable that extra frequency temporarily in CPU-heavy scenes and then switch back to base, running boosted just for a few seconds at a time and lifting the lows. But it isn't that intelligent and is basically useless for games. It makes performance more unpredictable. Perhaps it's useful only for single-threaded productivity apps.
Power saving features greatly reduce TDP/noise in idle mode making OC more viable. You get more noise only when playing games and low TDP otherwise. With large coolers the TDP is not a problem.
Pentium III 900E, ECS P6BXT-A+, 384MB RAM, GeForce FX 5600 128MB, Voodoo 2 12MB, Yamaha SM718 ISA
Athlon 64 3400+, Gigabyte GA-K8NE, 2GB RAM, GeForce GTX 260 896MB, Sound Blaster Audigy 2 ZS
Phenom II X6 1100, Asus 990FX, 32GB RAM, GeForce GTX 980 Ti
Testbench 14:
- Gigabyte GA-MA770-UD3 v2.0
- Athlon X2 7850 BE OCed to 3.0Ghz, AD785ZWCJ2BGH (Kuma, stepping B3, released in 2009)
- Asus GeForce GTX 480 (factory clocks, 701 GPU, 924 memory, 1401 shader, released in 2010). NVidia driver 197.41
- 4x 2GB DDR2 800 unganged running at 4 4 4 12
I used 15x multiplier, stock CPU voltage.
3d mark 2006 breakdown, 1024x768:
3d mark 2006 breakdown, 1600x900:
3d mark 2006 breakdown, 1600x1200:
OCed Athlon X2 7850 BE at 3.0Ghz slightly outperforms Athlon 64 X2 6400+ (Windsor).
Games tested:
Windows XP
- F.E.A.R. (2005) - in 1600x1200 with max settings we get 145 fps average in built-in benchmark, 1 fps more than Athlon 64 X2 6400+ (Windsor).
- World in Conflict (2007) - in 1600x1200 we get 32 average fps in the built-in benchmark with the best visual quality settings, which is 1 fps less than Athlon 64 X2 6400+ (Windsor).
- Crysis (2007) - in 1600x1200 average fps in benchmark is 46 without full screen anti aliasing. Everything else was set to max. When 4x anti aliasing is enabled, we get 45 average fps. 4 fps less than Athlon 64 X2 6400+ (Windsor).
- Far Cry 2 (2008) - with max settings, in 1600x1200 we get 44 fps in the built-in "Demo Ranch" benchmark and 25 fps in the "Action Scene" benchmark. Not enough for an enjoyable experience.
- Colin McRae: Dirt 2 (2009) - with ultra settings, in 1600x1200 we get 61 average fps in the built-in benchmark.
- Tom Clancy's H.A.W.X. 2 (2010) - with max settings and anti aliasing set to 4x, in 1600x1200 we get 82 average fps in the built-in benchmark.
Windows 7
- Crysis (2007) - in dx10 mode in 1600x1200 we get 31 average fps with very high quality and 4x anti aliasing in built-in benchmark. In dx9 mode with high details and 4x anti aliasing we get 47 fps.
- Grand Theft Auto IV (2008) - in 1600x1200 we get 35 average fps in the built-in benchmark with very high quality, view distance 50, detail distance 60, vehicle density 51 (double values from defaults).
- Warhammer 40,000: Dawn of War II (2009) - with ultra settings, in 1600x1200 we get 40 average fps in the built-in benchmark.
- Batman: Arkham City (2011) - in 1600x1200 we get average 43 fps with very high quality and FXAA high anti-aliasing. PhysX was off as it could not be enabled with the old driver 197.41.
Conclusion about OCed Athlon X2 7850 BE at 3.0Ghz with GeForce GTX 480 and DDR2 800:
- about equal to Athlon 64 X2 6400+ (Windsor) performance wise except for Crysis
- it is a good choice for Windows XP era games (2002-2006)
- mediocre coverage of Windows Vista era (2007-2009) due to slow CPU, noticeable in games from 2008
- noticeably slower than Phenom X4 9950 BE 2.6Ghz in Windows Vista/7 games such as GTA 4, Far Cry 2, Colin McRae: Dirt 2, Tom Clancy's H.A.W.X. 2, Batman: Arkham City. We really need 4 AMD cores for later games.
- in hindsight, Phenom X4 9950 BE 2.6Ghz is a decent choice from tested CPUs to cover both Windows XP and Windows Vista era except for Crysis. It covers Vista era much better in general.
Next steps:
- test Phenom II X4 940 BE 3.0Ghz
Pentium III 900E, ECS P6BXT-A+, 384MB RAM, GeForce FX 5600 128MB, Voodoo 2 12MB, Yamaha SM718 ISA
Athlon 64 3400+, Gigabyte GA-K8NE, 2GB RAM, GeForce GTX 260 896MB, Sound Blaster Audigy 2 ZS
Phenom II X6 1100, Asus 990FX, 32GB RAM, GeForce GTX 980 Ti
Archer57 wrote on Today, 01:08:Honestly i do not like whole "turbo" thing at all, either modern hardware or this old stuff with first implementations. I mean we already have power saving where frequency is reduced when idle, why do we need to go both ways, why not just set the frequency to max it can handle + power limit?
It makes modern hardware weird, where it can be overclocked by downvolting (less power = more boost within power limit) etc.
Yes, I agree that turbo, in a technical sense, is lame. But when you speak about wanting to unlock cores on FX like in the previous gen, even that was affected: turbo added into the design changed the handling of and access to each core. They started locking that down because they had to place limits on how many cores you can actually use when engaging turbo. The thing is, when they tried to get turbo fully implemented in FX, they also redesigned the core architecture to add features they were missing, and added threading to use more of the "idle" parts of the chip. It was still too hot. That is why they went with the 2 core x 2 thread scheme, 4x4 etc, only engaging the full turbo when using only half of the chip and running slower with all of it loaded. And they even ended up losing actual cores compared to the previous gen. The speeds weren't even that great. They just couldn't get higher frequencies out of FX without cooking the chip, which is why there were complaints and why Phenom II remained the more performant chip. I'm not sure if FX financially helped keep the company afloat or simply hurt them. It was a long time for them to get Ryzen out. I would guess it helped keep them afloat.
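To make the "only half the chip" condition concrete, here is a small sketch using the FX-8150's published clock tiers (3.6 base / 3.9 Turbo Core / 4.2 Max Turbo); the selection rule is simplified from the documented behaviour, not an exact model:

# FX-8150: 4 modules, 8 integer cores, three published clock tiers
fx_8150_ghz = {
    "base": 3.6,        # all modules loaded, no thermal/power headroom
    "turbo_core": 3.9,  # all-core boost when there is headroom
    "max_turbo": 4.2,   # only when half the modules or fewer are active
}

def clock_for(active_modules, total_modules=4):
    # Simplified version of the rule described above
    if active_modules <= total_modules // 2:
        return fx_8150_ghz["max_turbo"]
    return fx_8150_ghz["turbo_core"]  # drops toward base once limits are hit

print(clock_for(2))  # 4.2
print(clock_for(4))  # 3.9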
Generally, I think the auto overclocking feature shows the weakness of the industry in being able to get more speed out of silicon anymore. So they are pushing the limits of what anyone can do and calling it faster now.
Archer57 wrote on Today, 01:08:On phenoms? I'd much rather disable turbo, raise multiplier (so all cores) and leave power saving on. This way it'll be more predictable and as long as cooling can handle it performance will be better.
I like the Phenoms. It is a more traditional design, and easier and more predictable like you say. But they wanted to make the next gen "faster", and so now the chip design is there to be your parent and do the overclocking for you.