VOGONS

AMD drops the mic


Reply 140 of 279, by gdjacobs

Rank l33t++
Scali wrote:

Problem is, those Intel CPUs that are better at gaming are much cheaper than the Ryzen models introduced so far.
I say it's useless to compare those CPUs. We'll have to wait for the 6-core and 4-core Ryzen models to get a better idea of where the Ryzen stands when the prices are comparable.

I'm not arguing this point. If all you do is game, buy a four core i7 and be happy. If you want to cut your own home movies or perform other tasks which take advantage of high thread counts, you might like more muscle.

All hail the Great Capacitor Brand Finder

Reply 141 of 279, by Scali

Rank l33t
gdjacobs wrote:

I'm not arguing this point. If all you do is game, buy a four core i7 and be happy. If you want to cut your own home movies or perform other tasks which take advantage of high thread counts, you might like more muscle.

That's obvious.
The relevant question for AMD is: how many people want the extra muscle (and are willing to pay the premium, because a 4-core CPU is still a lot cheaper)? How many of those people even know they want it? And if they do, would they actually go to AMD for it?
Besides, these 8-core CPUs aren't *that* much better at things like video encoding than the 4-core CPUs.
In fact, if you look at the Handbrake HEVC test here: http://www.anandtech.com/show/11170/the-amd-z … 00x-and-1700/20
That dang 7700K actually comes out on top.
In the H264 LQ test it also delivers virtually the same performance as the 1800X.
As I said, 'Moar coars is so 2011'. The 7700K is the CPU to beat, and AMD is having a hard time beating it.

The gamer market is much larger, and it provides a bit of a 'halo' effect as well: I bet that many people who game only casually will mostly see CPUs being benchmarked in gaming-related scenarios, because that just happens to be a big theme in reviews. So it more or less comes across as: the better gaming CPU is the better buy.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 142 of 279, by gdjacobs

Rank l33t++

The final encode is a small part of video editing and production 😀

Also, what was the toolchain that compiled x265 for that Handbrake build? Michael Larabel found some suspicious test cases where toolchain optimizations might be required to extract consistent performance. He'll be doing some GCC and Clang performance comparisons to try and tease out an indication of what's going on.

All hail the Great Capacitor Brand Finder

Reply 143 of 279, by Scali

Rank l33t
gdjacobs wrote:

The final encode is a small part of video editing and production 😀

Not in terms of CPU load 😀
Editing is normally limited by the speed of the user's input (or possibly the I/O speed of the device you're importing your media from), not the CPU.
Hence, it's not that relevant what the performance is there (which is why that part is never benchmarked).
But nice try.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 144 of 279, by gdjacobs

Rank l33t++

Creatives tend to like it when their work isn't paced too much by the technology.

Many aspects are I/O limited. Certain complex compositing filters are quite CPU intensive, however.

All hail the Great Capacitor Brand Finder

Reply 145 of 279, by SPBHM

Rank Oldbie

I'm fairly disappointed with how immature the platform looks, with poor BIOS quality and SMT giving a performance penalty in gaming (the same doesn't happen with a 6900K).

Overall I'm still quite positive about it, because it's a big step up from Bulldozer, from a company I no longer expected anything like this from. It delivers some solid MT performance, and SMT looks pretty strong for that.

Low OC potential is also a clear negative...
Still, I quite like the idea of the 1700 (non-X) with a B350 board and some OC; incredible value for MT.

But hopefully BIOS quality will improve a lot by the time the R5/R3 parts are released, and hopefully newer revisions will overclock better, because I also still quite like the idea of a quad-core/hexa-core Ryzen with a cheap MB and some OC.

Reply 146 of 279, by Tetrium

Rank l33t++

I'd prefer Intel to actually have some good competition so I hope the AMD chips get up and running.

I'm not really bothered about the gaming aspects of either Intel or AMD CPUs, as gaming isn't something I do as much as I used to. But I do prefer a stable and relatively upgradeable platform, so let's see how AM4 fares in the time ahead 😀

I don't know why they made Bulldozer, it was a pretty lame design tbh.

I don't count the number of cores; it's the total performance that matters more to me.

Scali wrote:
Kreshna Aryaguna Nurzaman wrote:

For (relatively) modern games, I don't have a dedicated gaming rig. My main rig is 32-bit Windows 7, which works simultaneously as a gaming rig, working rig, and audiophile PC. I mostly work with Office (mostly PowerPoint) while having probably a dozen browser tabs open, while listening to music on said PC. As such, multi-threaded performance is important to me.

Not really.
It's not like you have more (active) threads just because you have multiple programs/tabs open. In fact, most browsers will put invisible tabs on idle.
That's just the multicore myth right there.

Multiple programs do spread across several cores, right?
He wasn't talking about multiple tabs; he was talking about multiple programs, having lots of stuff in use at the same time (which is actually more important to me than gaming is).

KAN's requirements are very similar to mine it seems 😀

What's missing in your collections?
My retro rigs (old topic)
Interesting Vogons threads (links to Vogonswiki)
Report spammers here!

Reply 147 of 279, by Scali

Rank l33t
Tetrium wrote:

Multiple programs do spread across several cores, right?

Not necessarily.
That's my point.
Idle threads do not get scheduled at all.
If you run 100 programs at a time, but they're all just waiting for keyboard/mouse input, then none of them ever actually get put on a core.
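
To illustrate the point (a minimal Python sketch of my own, hypothetical and not from this thread): a process can own a hundred threads that all sit blocked waiting for an event, and its CPU time barely moves, because blocked threads are never scheduled onto a core.

```python
# Hypothetical sketch: 100 threads that just wait for "input",
# like idle programs. Blocked threads are never scheduled, so the
# process consumes almost no CPU time.
import threading
import time

stop = threading.Event()
threads = [threading.Thread(target=stop.wait) for _ in range(100)]
for t in threads:
    t.start()

time.sleep(3)  # three seconds of wall-clock time pass...

# ...but the CPU time used by this process stays near zero.
print(f"CPU time used: {time.process_time():.3f}s over ~3s wall clock")

stop.set()
for t in threads:
    t.join()
```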

It always surprises me that people seem to have forgotten that about a decade ago we still had single-core CPUs, and back then we could also run many applications at a time with few problems.
Heck, right after booting your OS you'd already have tons of background services and threads, and you barely noticed. The main bottleneck was memory, not CPU.

So the answer is not "I run more programs, so I need more cores".
It's about what these programs do. Browser tabs generally don't actually 'do' much. They just consume memory and wait for user input (with the exception of things like audio/video streams, which continue running when you switch away from the tab, but you generally don't want more than one of them running at the same time).
I currently have 6 tabs open in my browser, and when I look at Task Manager, I see my browser at 0-1%. The system is 95+% idle (I also have various other apps open: Outlook, Slack, Skype, a developer IDE, and there are several development DBs running on this machine).

So the question is: how many non-idle threads do I need to run at a time? And how non-idle are they? Because even a video stream in a browser only needs a few % of a single core these days, generally less than 25%. So it's not like you immediately need a second core for a second video stream. A single core could multiplex several streams before it hits 100%.
I think you'll find remarkably few scenarios where an 8-core CPU really gives you a significant advantage in practice.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 148 of 279, by Tetrium

Rank l33t++
Scali wrote:
Tetrium wrote:

Multiple programs do spread across several cores, right?

Not necessarily.
That's my point.
Idle threads do not get scheduled at all.

I don't think idle threads would be a problem for a single core anyway.

It always surprises me that people seem to have forgotten that about a decade ago we still had single-core CPUs, and back then we could also run many applications at a time with few problems.

Doesn't really surprise me. Not all people are as computer savvy as we are.

And the bottom line is that most sane people don't really care about the number of cores or the base frequency; all they care about is performance and what it costs to get it.

What's missing in your collections?
My retro rigs (old topic)
Interesting Vogons threads (links to Vogonswiki)
Report spammers here!

Reply 149 of 279, by Scali

Rank l33t
Tetrium wrote:

I don't think idle threads would be a problem for a single core anyway.
...
Doesn't really surprise me. Not all people are as computer savvy as we are.

Just a moment ago you were arguing for "moar coars" for browser tabs.

Tetrium wrote:

And the bottom line is that most sane people don't really care about the number of cores or the base frequency; all they care about is performance and what it costs to get it.

My point is that there are remarkably few 'sane' people, if that is your definition.
Most people (including yourself) just think "More programs == more threads == more cores".
So AMD is actively marketing the "moar coars" thing.
I also read in many places "Sure, 7700k is better for gaming *now*, but just wait, in a few years all games will be optimized for multicore, and you want that 8-core CPU".
AMD is pushing that angle too: they've announced that they'll be working with game developers to release patches to improve performance.
Now where have I heard all this before? Oh yes, Bulldozer.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 150 of 279, by gdjacobs

Rank l33t++

One thing which is missing from the conversation, I feel, is an understanding of the real impact of performance.
What impact does sacrificing performance have?
What kinds of jobs need more performance? What can additional performance be used for?
What strategies can be used to gain that performance?

For example, achieving higher frame rates in Crysis is all well and good, but if your frame rates are high enough, what does it matter? That's why I like HardOCP's graphics card reviews: in their gaming benchmarks they try to evaluate the real benefit of one configuration over another in terms of playability and image quality.

All hail the Great Capacitor Brand Finder

Reply 151 of 279, by sunaiac

Rank Oldbie
SPBHM wrote:

SMT giving a performance penalty in gaming (the same doesn't happen with a 6900K),

That's at least partly a core parking problem: under Windows 10, core parking is set to 100% on Intel but, wrongly, to 10% on Ryzen.
That's explained here: http://www.hardware.fr/articles/956-8/retour- … erformance.html

And with that corrected the results change as follows, which could be a preview of Ryzen's performance on a patched Windows 10:
http://www.hardware.fr/getgraphimg.php?id=446&n=13
http://www.hardware.fr/getgraphimg.php?id=449&n=9

Now, there is still the problem of the drivers and the games themselves. But if the IPC is at Broadwell level in most non-game applications, which depend on fewer "environmental" factors, there's no reason it can't reach the same level in games. Obviously that can't happen in the two weeks before the processor launches, so expecting Ryzen to be at Broadwell level in games on day one, or dismissing the processor's performance based on those same tests, is probably a bit premature.

Anyway, we'll see how things change, but for now I think the 1700 is extremely interesting, and I'd rather have a 1700 + FreeSync screen than a 6900K for more money, as I'm pretty sure I can't tell the difference between 117 and 134 frames per second, but I can tell the difference between tearing and no tearing.

R9 3900X/X470 Taichi/32GB 3600CL15/5700XT AE/Marantz PM7005
i7 980X/R9 290X/X-Fi titanium | FX-57/X1950XTX/Audigy 2ZS
Athlon 1000T Slot A/GeForce 3/AWE64G | K5 PR 200/ET6000/AWE32
Ppro 200 1M/Voodoo 3 2000/AWE 32 | iDX4 100/S3 864 VLB/SB16

Reply 152 of 279, by Scali

Rank l33t
gdjacobs wrote:

One thing which is missing from the conversation, I feel, is an understanding of the real impact of performance.
What impact does sacrificing performance have?
What kinds of jobs need more performance? What can additional performance be used for?
What strategies can be used to gain that performance?

I think I already pointed that out earlier.
There is no single answer, because it depends a lot on what kind of mix of applications you use.
You can't even generalize it to "video editing" or "database servers" or whatever. Because it depends on what specific software you use (some scale better than others), and even what specific workloads you perform with that software.
So it's useless to debate this in detail. There are too many variables.
The only thing you can do, as I did before, is to just look at Task Manager to see what your CPU is actually doing, and draw your conclusions from that.
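
For example (a hypothetical sketch, assuming the third-party psutil package; not something from the post itself), you can sample per-core load for a minute while running your usual mix of applications and see how many cores are actually kept busy:

```python
# Hypothetical sketch: sample per-core utilization for 60 seconds
# while you run your normal workload.
import psutil

samples = []
for _ in range(60):
    # cpu_percent(interval=1) blocks for one second and returns the
    # utilization of each logical core over that second.
    samples.append(psutil.cpu_percent(interval=1, percpu=True))

# Transpose so each row is one core's 60 samples, then average.
for core, loads in enumerate(zip(*samples)):
    print(f"core {core}: {sum(loads) / len(loads):5.1f}% average load")
```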

gdjacobs wrote:

For example, achieving higher frame rates in Crysis is all well and good, but if your frame rates are high enough, what does it matter? That's why I like HardOCP's graphics card reviews: in their gaming benchmarks they try to evaluate the real benefit of one configuration over another in terms of playability and image quality.

I think the bigger question is:
What are things going to look like, going forward?
Will you upgrade your GPU during the lifetime of this system? If so, will you still have 'high enough' frame rates, or would the new GPU require more oomph from the CPU to drive the extra detail and effects it is capable of?
And what about the games that come out in the future? Will they have the same CPU requirements as Crysis, or will they also want more? And if so, will they want more single-core performance (so should you have gotten that 7700k anyway?), or do they want moar coars?

The answer is: we don't know, and it depends who you ask.
I am of the opinion that we aren't going to make a lot of progress in multi-core scaling (we've been trying for decades now, it is what it is. There are already two popular consoles out with 8 cores, it's not a 'new' thing anymore), so single-core performance will remain important.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 153 of 279, by Carlos S. M.

Rank Oldbie

I don't really know if I should even consider Ryzen or stick with Intel. I don't game that much, but I still do some demanding stuff, and sometimes that needs good multi-threaded and high single-threaded performance (something like a high-end i5 or i7).
Ryzen looks promising, but in Cinebench the R7 1800X only matches my OC'd i5 2500K in single-thread, while the R7 1700 and 1700X lose to it. As an upgrade I initially looked at an i7 6700K or 7700K. The other issue is overclocking: Ryzen is really a poor overclocker, even worse than Broadwell CPUs. Most samples hardly surpass 4 GHz, so it can be a dealbreaker for me, especially with Intel's 6C and 8C parts being better overclockers; even the i7 6950X with its 10 cores can OC better than Ryzen right now.

Right now I'm still looking at an i7 7700K rig, or Coffee Lake, depending on the timing/money/benches.

What is your biggest Pentium 4 Collection?
Socket 423/478 Motherboards with Universal AGP Slot
Socket 478 Motherboards with PCI-E Slots
LGA 775 Motherboards with AGP Slots
Experiences and thoughts with Socket 423 systems

Reply 154 of 279, by gdjacobs

Rank l33t++
Scali wrote:
gdjacobs wrote:

One thing which is missing from the conversation, I feel, is an understanding of the real impact of performance.
What impact does sacrificing performance have?
What kinds of jobs need more performance? What can additional performance be used for?
What strategies can be used to gain that performance?

I think I already pointed that out earlier.
There is no single answer, because it depends a lot on what kind of mix of applications you use.
You can't even generalize it to "video editing" or "database servers" or whatever. Because it depends on what specific software you use (some scale better than others), and even what specific workloads you perform with that software.
So it's useless to debate this in detail. There are too many variables.
The only thing you can do, as I did before, is to just look at Task Manager to see what your CPU is actually doing, and draw your conclusions from that.

We've all kind of been dancing around the issue, but I think it's important that everyone consider the question explicitly.

Scali wrote:
gdjacobs wrote:

For example, achieving higher frame rates in Crysis is all well and good, but if your frame rates are high enough, what does it matter? That's why I like HardOCP's graphics card reviews: in their gaming benchmarks they try to evaluate the real benefit of one configuration over another in terms of playability and image quality.

I think the bigger question is:
What are things going to look like, going forward?
Will you upgrade your GPU during the lifetime of this system? If so, will you still have 'high enough' frame rates, or would the new GPU require more oomph from the CPU to drive the extra detail and effects it is capable of?
And what about the games that come out in the future? Will they have the same CPU requirements as Crysis, or will they also want more? And if so, will they want more single-core performance (so should you have gotten that 7700k anyway?), or do they want moar coars?

The answer is: we don't know, and it depends who you ask.
I am of the opinion that we aren't going to make a lot of progress in multi-core scaling (we've been trying for decades now, it is what it is. There are already two popular consoles out with 8 cores, it's not a 'new' thing anymore), so single-core performance will remain important.

We might see Gustafson's Law at work as scene sizes and detail levels ramp up. Also, the tools available for concurrent programming in most popular languages are extremely bad.

All hail the Great Capacitor Brand Finder

Reply 155 of 279, by Scali

Rank l33t
gdjacobs wrote:

Also, the tools available for concurrent programming in most popular languages are extremely bad.

Tools aren't going to help.
It's the algorithms. Certain algorithms just don't scale well with parallelism, and no alternatives that do scale have been found.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 156 of 279, by gdjacobs

Rank l33t++

Tools certainly do help when we're talking about smaller titles that don't have huge engineering resources to throw at engine programming.

All hail the Great Capacitor Brand Finder

Reply 157 of 279, by Scali

Rank l33t
gdjacobs wrote:

Tools certainly do help when we're talking about smaller titles that don't have huge engineering resources to throw at engine programming.

No, they don't. I hate to repeat myself, but: algorithms.
If you don't have an algorithm that can be parallelized efficiently, there isn't a tool in the world that's gonna change that.
Besides, what do smaller titles have to do with it? Even the biggest titles, from studios with the largest budgets and the best engineers in the world developing their own engines, can't get their engines to scale.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 158 of 279, by gdjacobs

Rank l33t++

I'm interested to know what workloads are proving most difficult for game programmers, but that wasn't what I was getting at.

For software that isn't as well financed (which is most titles on the market), the dev team is obviously going to pick the low-hanging fruit when it comes to software engineering and optimization. Amdahl's Law at work.

Top studios obviously do have more resources to maximize multithreading and push the concurrent fraction to the limit, for what good it does them, but more cores do mean the opportunity to add more features with little impact on wall-clock time, so long as they don't contribute significantly to the sequential fraction. That's Gustafson's Law at work. A numeric sketch of this follows below.
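
To put a number on that (my own hypothetical sketch, not gdjacobs'): Gustafson's Law says that if you hold wall-clock time fixed and grow the parallel part of the work to fill the cores, the scaled speedup is (1 - p) + p*N for parallel fraction p on N cores.

```python
# Hypothetical sketch: Gustafson's Law. Instead of shrinking a fixed
# job, ask how much MORE work N cores let you do in the same time:
# scaled speedup = (1 - p) + p * N, with p the parallel fraction.
def gustafson(p: float, n: int) -> float:
    return (1.0 - p) + p * n

for n in (1, 2, 4, 8, 16):
    print(f"{n:>2} cores: {gustafson(0.9, n):5.1f}x the work in the same time")
# With p = 0.9, eight cores handle ~7.3x the scene size/detail per
# frame: extra features at little cost in wall-clock time.
```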

All hail the Great Capacitor Brand Finder

Reply 159 of 279, by Scali

Rank l33t
gdjacobs wrote:

For software that isn't as well financed (which is most titles on the market), the dev team is obviously going to pick the low-hanging fruit when it comes to software engineering and optimization.

Not really, they'll just license engines and libraries from a third party that already did the engineering and optimization.

gdjacobs wrote:

Amdahl's Law at work.

That has nothing to do with Amdahl's Law. Amdahl's Law explains why scalability of algorithms depends on both the parallel and serial performance components, ergo, just adding more parallel resources will eventually make you entirely limited by the serial performance component.
It has nothing to do with how well engineers may or may not have optimized a particular piece of code. It is at a more abstract, fundamental level of information technology, in the realm of algorithmic complexity. There is just a hard limit to how far you can parallelize/optimize a certain algorithm.
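
As a worked example (my sketch, not Scali's): Amdahl's Law gives speedup 1 / ((1 - p) + p/N) for parallel fraction p on N cores, so the serial fraction (1 - p) imposes a hard ceiling of 1 / (1 - p) no matter how many cores you throw at it.

```python
# Hypothetical sketch: Amdahl's Law. speedup = 1 / ((1 - p) + p / N).
# Even a 90%-parallel workload can never exceed 10x, regardless of N.
def amdahl(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

for n in (1, 2, 4, 8, 16, 1_000_000):
    print(f"{n:>7} cores: {amdahl(0.9, n):5.2f}x")
# Output climbs 1.00, 1.82, 3.08, 4.71, 6.40 ... and flattens near 10x:
# going from 8 to 16 cores buys only ~36% more, not 2x.
```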

gdjacobs wrote:

Top studios obviously do have more resources to maximize multithreading and push the concurrent fraction to the limit, for what good it does them, but more cores do mean the opportunity to add more features with little impact on wall-clock time, so long as they don't contribute significantly to the sequential fraction. That's Gustafson's Law at work.

Yes, but what features are you going to just 'bolt on' to a game? Most things in a game are interactive, and therefore dependent on other processes. This is where Amdahl comes in again. User input is inherently serial, as is the output to a GPU: frames need to be rendered in the correct order, and the render operations/passes need to be performed in the right order as well. You need to do your z-pass before your deferred shading pass. You need to do your shadow pass before your light pass. You need to complete your image buffer before you can post-process it, and so on. In some cases even individual objects or polygons need to be rendered in a specific order (e.g. back-to-front or front-to-back). At its most basic, any kind of animation is a sequence of images. In theory you could parallelize a game by rendering all frames at the same time, but in practice that would be meaningless: you need interaction between each frame and the user input.
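
A sketch of that dependency chain (hypothetical, with stub pass bodies; only the data dependencies matter, and they are what forces the order):

```python
# Hypothetical sketch: the passes in one frame form a serial chain,
# because each consumes the output of the one before it.
def simulate(user_input): return f"state({user_input})"
def z_prepass(state):     return f"z({state})"
def shadow_pass(state):   return f"shadows({state})"
def light_pass(state, z, shadows): return f"lit({state},{z},{shadows})"
def post_process(image):  return f"final({image})"

def render_frame(user_input):
    state = simulate(user_input)       # needs THIS frame's input first
    z = z_prepass(state)               # z-pass before deferred shading
    shadows = shadow_pass(state)       # shadow pass before light pass
    image = light_pass(state, z, shadows)
    return post_process(image)         # needs the completed image buffer

print(render_frame("frame 1"))  # frame 2 can't start until the user
                                # has reacted to frame 1 on screen
```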

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/