Reply 160 of 279, by gdjacobs
wrote: Amdahl's Law at work.
That has nothing to do with Amdahl's Law. Amdahl's Law explains why the scalability of an algorithm depends on both its parallel and its serial components: keep adding parallel resources and you eventually become limited entirely by the serial component.
It has nothing to do with how well engineers may or may not have optimized a particular piece of code. It sits at a more abstract, fundamental level, in the realm of algorithmic complexity: there is simply a hard limit to how far you can parallelize a given algorithm.
Amdahl's Law presents that hard performance limit. It's a reasonable extrapolation that approaching the limit is a process of diminishing returns, and at some point devs will bow to pressure from their production team, take the performance wins they have in hand, and call it a day.
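To put numbers on that diminishing-returns curve, here is a minimal sketch of Amdahl's Law; the 10% serial fraction is an assumed figure for illustration, not something from the post:

```python
def amdahl_speedup(serial_fraction: float, n_workers: int) -> float:
    """Amdahl's Law: speedup = 1 / (s + (1 - s) / N)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_workers)

# With a 10% serial fraction (assumed for illustration), speedup is
# capped at 1/0.1 = 10x no matter how many cores you throw at it.
for n in (1, 2, 4, 8, 16, 64, 1024):
    print(f"{n:5d} cores -> {amdahl_speedup(0.10, n):.2f}x")
```

Going from 16 to 1024 cores only moves the needle from 6.4x to roughly 9.9x: exactly the point where a production schedule wins over further optimization.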
wrote: Top studios obviously do have more resources to maximize multithreading and push the concurrent fraction to the limit, for what good it does them, but more cores do mean the opportunity to add more features with little impact on wall clock time, so long as they don't contribute significantly to the sequential fraction. That's Gustafson's Law at work.
Yes, but what features are you going to just 'bolt on' to a game? Most things in a game are interactive, and therefore dependent on other processes. This is where Amdahl comes in again. User input is inherently serial, as is the output to a GPU. In its most basic form, any kind of animation is a sequence of images. In theory you could parallelize it by rendering all of a game's frames at the same time, but in practice that would be meaningless: each frame depends on the previous one and on the user's input.
Well, in game physics models for one. Never bank against a developer finding ways to use additional compute resources.
All hail the Great Capacitor Brand Finder