VOGONS

AMD drops the mic


Reply 160 of 279, by gdjacobs

Rank: l33t++
Scali wrote:
gdjacobs wrote:

Amdahl's Law at work.

That has nothing to do with Amdahl's Law. Amdahl's Law explains why scalability of algorithms depends on both the parallel and serial performance components, ergo, just adding more parallel resources will eventually make you entirely limited by the serial performance component.
It has nothing to do with how well engineers may or may not have optimized a particular piece of code. It is at a more abstract, fundamental level of information technology, in the realm of algorithmic complexity. There is just a hard limit to how far you can parallelize/optimize a certain algorithm.

Amdahl's Law presents that hard performance limit. It's a reasonable extrapolation that approaching that limit will be a process of diminishing returns and at some point devs are going to bow to pressure from their production team, take their performance wins in hand and call it a day.
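
To put rough numbers on that limit, here is a minimal sketch of the two laws being argued over in this thread. The 95%-parallel / 5%-serial split is purely illustrative, not a measurement of any real game:

    // Amdahl: fixed-size speedup with parallel fraction p on n cores.
    // Gustafson: scaled speedup when the workload grows with n, serial fraction s.
    #include <cstdio>

    double amdahl(double p, int n)    { return 1.0 / ((1.0 - p) + p / n); }
    double gustafson(double s, int n) { return n - s * (n - 1); }

    int main() {
        const int cores[] = {2, 4, 8, 16, 64, 1024};
        for (int n : cores) {
            // With 95% of the work parallel, Amdahl caps the speedup at 1/0.05 = 20x,
            // no matter how many cores you add.
            std::printf("%4d cores: Amdahl %6.2fx, Gustafson %7.2fx\n",
                        n, amdahl(0.95, n), gustafson(0.05, n));
        }
        return 0;
    }

The diminishing returns are visible immediately: in this toy case, going from 16 to 64 cores only moves the Amdahl figure from roughly 9x to roughly 15x.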

Scali wrote:
gdjacobs wrote:

Top studios obviously do have more resources to maximize multithreading and push the concurrent fraction to the limit for what good it does them, but more cores do mean having the opportunity to add more features with little impact on wall clock time so long as they don't contribute significantly to the sequential fraction. That's Gustafson's Law at work.

Yes, but what features are you going to just 'bolt on' to a game? Most things in a game are interactive, and therefore dependent on other processes. This is where Amdahl comes in again. User input is inherently serial, as is the output to a GPU. In its most basic form, any kind of animation is a sequence of images. In theory you could parallelize it by rendering all frames in a game at the same time, but in practice that would be meaningless. You need interaction between each frame and the user input.

Well, in game physics models for one. Never bank against a developer finding ways to use additional compute resources.

All hail the Great Capacitor Brand Finder

Reply 161 of 279, by Kreshna Aryaguna Nurzaman

Rank: l33t
Tetrium wrote:

I'd prefer Intel to actually have some good competition so I hope the AMD chips get up and running.

I'm not really bothered about the gaming aspects of both Intel and AMD CPUs as gaming isn't something that I do as much as I used to. But I do prefer a stable and relatively upgradeable platform, so let's see how AM4 will fare in the upcoming time 😀

I don't know why they made Bulldozer, it was a pretty lame design tbh.

I don't count the number of cores as it's the total performance that matters more to me.

Scali wrote:
Kreshna Aryaguna Nurzaman wrote:

For (relatively) modern games, I don't have a dedicated gaming rig. My main rig is 32-bit Windows 7, which works simultaneously as a gaming rig, a working rig, and an audiophile PC. I mostly work with Office (mostly PowerPoint) while having probably a dozen browser tabs open and listening to music on the same PC. As such, multi-threaded performance is important to me.

Not really.
It's not like you have more (active) threads just because you have multiple programs/tabs open. In fact, most browsers will put invisible tabs on idle.
That's just the multicore-myth right there.

Multiple programs do spread across several cores, right?
He wasn't talking about multiple tabs; he was talking about multiple programs, having lots of stuff in use at the same time (which is actually also important to me, more so than gaming).

KAN's requirements are very similar to mine it seems 😀

gdjacobs wrote:

I'm interested to know what workloads are proving most difficult for game programmers, but that wasn't what I was getting at.

For software that isn't as well financed (which is most titles on the market), the dev team is obviously going to pick the low hanging fruit when it comes to software engineering and optimization. Amdahl's Law at work.

Top studios obviously do have more resources to maximize multithreading and push the concurrent fraction to the limit for what good it does them, but more cores do mean having the opportunity to add more features with little impact on wall clock time so long as they don't contribute significantly to the sequential fraction. That's Gustafson's Law at work.

Tetrium, gdjacobs, I'd rather not concern myself with Scali's fastidious, meticulous posts on multithreading, because whatever he said, even if it's correct, is irrelevant to my buying decision. Yes, opening a lot of browser tabs and desktop applications while copy-pasting between them, with foobar playing in the background, might not enjoy much benefit from Ryzen's excellent multithreaded performance. But the fact remains that Ryzen has a better price/performance ratio than Intel's offering in non-gaming applications, as Tech Report has shown.

value-productivity.png
Source: Tech Report.

Intel is still better for gaming, no doubt about it. But I'd rather spend my money on what matters more: the GPU. Instead of buying an expensive Intel CPU, I'd rather buy a cheap AMD CPU, enjoy a nice price/performance ratio in non-gaming applications, and put the extra money toward my GPU budget. After all, the Tech Report's gaming benchmarks were all performed at 1080p, and as Tech Report has put it, "gaming at higher resolutions will lessen the differences in performance between Ryzen chips and Intel's seventh-generation Core CPUs if a gamer chooses to play that way."

To me, an Intel CPU has become something like exotic amplifiers or ultra-expensive audio cables. Yes, they may deliver two or three percent better sonic performance, but I'd rather use a "good enough" amplifier and cables, and put the extra money where the difference is most audible: the loudspeakers, that is.

Also, I heartily agree with you, Tetrium. I, too, would love to see the AMD chips get up and running. I love to see good competition, since it benefits me as a consumer.

Never thought this thread would be that long, but now, for something different.....
Kreshna Aryaguna Nurzaman.

Reply 162 of 279, by Scali

Rank: l33t
gdjacobs wrote:

Amdahl's Law presents that hard performance limit. It's a reasonable extrapolation that approaching that limit will be a process of diminishing returns and at some point devs are going to bow to pressure from their production team, take their performance wins in hand and call it a day.

I see no such relation.
Some algorithms are very easy to parallelize/optimize. Others are really hard. This difficulty has no relation to how well they may or may not scale based on Amdahl's law.
Thing is, optimizations change algorithms, and therefore they need to be reevaluated according to Amdahl's Law.
Amdahl's Law only gives you the hard limit for a specific implementation.

gdjacobs wrote:

Well, in game physics models for one. Never bank against a developer finding ways to use additional compute resources.

You're joking, I suppose?
Game physics do not only have dependencies on user input, but also make each physics object dependent on every other physics object. Physics simulations are by nature iterative solutions, and iterative solutions are sequential by default. Each iteration depends on the one before.
Physics is one of those things in modern games that has trouble scaling to many cores.
The PPU was an interesting processor, because it was based on network packet switching technology. It could efficiently forward results from one core to the next, allowing you to quickly solve these dependencies and iterations because you could distribute your workload efficiently.
GPGPUs can do this, but only part of the way. CPUs have too much overhead.
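
To make that iteration dependency concrete, here is a hypothetical Gauss-Seidel-style relaxation loop of the kind such solvers use. All names and the relaxation rule are invented for illustration and not taken from any real physics engine:

    #include <vector>

    struct Body { float vel; };
    struct Constraint { int a, b; };   // a pair of interacting bodies

    // Illustrative relaxation step: nudge two bodies toward a shared velocity.
    void relax(const Constraint& c, std::vector<Body>& bodies) {
        float avg = 0.5f * (bodies[c.a].vel + bodies[c.b].vel);
        bodies[c.a].vel = avg;
        bodies[c.b].vel = avg;
    }

    void solve(std::vector<Body>& bodies, const std::vector<Constraint>& constraints) {
        for (int iter = 0; iter < 8; ++iter) {         // sequential: pass N reads what pass N-1 wrote
            for (const Constraint& c : constraints) {  // only the work inside one pass can be farmed out,
                relax(c, bodies);                      // and only for constraints that don't share a body
            }
        }
    }

    int main() {
        std::vector<Body> bodies = { {1.0f}, {0.0f}, {3.0f} };
        std::vector<Constraint> constraints = { {0, 1}, {1, 2} };
        solve(bodies, constraints);
        return 0;
    }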

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 163 of 279, by spiroyster

Rank: Oldbie

'Concurrent' programming tools (I'm assuming this means threading libraries?) have been pretty decent from the off?

Look at something like OpenMP (1998ish), which makes parallelisation of number-crunching routines a breeze through nothing more than C pre-processor directives (you do have to design your routine/algorithm accordingly, though; generally, mutually exclusive processes tend not to bother each other, so they lend themselves well to this form of optimisation... that is, until you synchronise your shared memory pool 😵; you don't get that behaviour for free). For threading services/daemons etc., I've personally never had a problem with the standard threading libraries provided by most languages o.0. Note the two different uses of 'multi-threading' here (for a single number-crunching routine, and for a concurrently running 'process').
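
For anyone who hasn't used it, this is roughly what that "breeze" looks like in practice. A minimal sketch, assuming a compiler with OpenMP enabled (e.g. -fopenmp); the loop body itself is made up:

    #include <cstdio>
    #include <vector>

    int main() {
        std::vector<double> data(1000000, 1.0);
        double sum = 0.0;

        // One directive parallelises the loop, *provided* each iteration only touches
        // its own element. The reduction clause is the part you don't get for free:
        // it exists precisely because the iterations share 'sum'.
        #pragma omp parallel for reduction(+:sum)
        for (long i = 0; i < (long)data.size(); ++i) {
            data[i] *= 2.0;     // independent per-element work
            sum += data[i];     // shared accumulator, handled safely by the reduction
        }

        std::printf("sum = %f\n", sum);
        return 0;
    }

Without the reduction (or some other synchronisation of the shared accumulator), that loop would be a textbook race.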

While there have been advancements in compiler optimisations, I can't see the compiler parallelising stuff for you unless you explicitly ask it to (race conditions!)... stuff like lambdas (non-capturing lambdas, in C++) may provide some context for this type of execution, but there are still unknowns, and potentially unsafe calls to shared memory. It's down to the thread which manages the shared data to block other parallel processes/threads from accessing it (mutex!), or not (BOOM!).

If there were something to tell the compiler that a process can be executed independently of the next sequential instructions, and also tell it about the TLS and the shared pool, the compiler might then be able to compile accordingly. This is essentially what threading libraries do, but these tools cannot design your code for you.
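
And this is the flip side with a plain threading library: it hands you the thread, the thread-local storage and the mutex, but deciding what is shared and guarding it is still the programmer's design work. A minimal C++11 sketch with made-up names:

    #include <cstdio>
    #include <mutex>
    #include <thread>
    #include <vector>

    int main() {
        std::vector<int> shared_results;   // the shared pool
        std::mutex guard;                  // protects shared_results

        auto worker = [&](int id) {
            int local = id * id;                       // thread-local work, no locking needed
            std::lock_guard<std::mutex> lock(guard);   // serialise only the shared access...
            shared_results.push_back(local);           // ...or skip the lock and it's a data race (BOOM!)
        };

        std::vector<std::thread> pool;
        for (int i = 0; i < 4; ++i) pool.emplace_back(worker, i);
        for (auto& t : pool) t.join();

        std::printf("%zu results\n", shared_results.size());
        return 0;
    }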

'Multi-threading' is such a loosely used word. It does not directly equate to the number of processors (even 'multi-processing' can mean something different from the point of view of an OS). Control-Alt-Delete and look at the number of 'Threads' running. My laptop certainly doesn't have 2490 cores... no wait 2463... o.0

The problem with physics is that it's difficult to fake without going full-blown FEA on it (or some form of FEA within an accelerated structure like a kd-tree or something), and a lot of FEA is not mutually exclusive in a lot of cases. So while it can be broken down into digestible packets, it can't be done very efficiently without making sacrifices in precision.

Think of a Jenga stack 100 levels high. Remove the bottom one and this has an effect on the top layer, but not directly; it has to process the interaction between all the other ones. The top objects are dependent on all the other objects, and that dependency is what throws a spanner in the works for efficient parallelisation. The reaction/action of one is not mutually exclusive of the other. Of course this all depends on the level of scale at which you want to simulate rigid (or non-rigid) body interaction, i.e. in a game do you really give a shit about the position of a grain of sand 😀

[EDIT:] typos, sepollos & grammos

Last edited by spiroyster on 2017-03-10, 13:20. Edited 1 time in total.

Reply 164 of 279, by Tetrium

Rank: l33t++
Scali wrote:
Tetrium wrote:

I don't think idle threads would be a problem for a single-core anyway.
...
Doesn't really surprise me. Not all people are as computer savvy as we are.

Just a moment ago you were arguing for "moar coars" for browser tabs.

Tetrium wrote:

And the bottom line is that most sane people don't really care for the number of cores or the base frequency; all they care about is performance and what it costs to get it.

My point is that there are remarkably few 'sane' people, if that is your definition.
Most people (including yourself) just think "More programs == more threads == more cores".
So AMD is actively marketing the "moar coars" thing.

Seems you misinterpreted my message, but it happens.

I'm actually more computer savvy and more sane compared to most people who work with their computers so I think I will be fine 😀

Kreshna Aryaguna Nurzaman wrote:

Tetrium, gdjacobs, I'd rather not concern myself with Scali's fastidious, meticulous posts on multithreading, because whatever he said, even if it's correct, is irrelevant to my buying decision. Yes, opening a lot of browser tabs and desktop applications while copy-pasting between them, with foobar playing in the background, might not enjoy much benefit from Ryzen's excellent multithreaded performance. But the fact remains that Ryzen has a better price/performance ratio than Intel's offering in non-gaming applications, as Tech Report has shown.

value-productivity.png
Source: Tech Report.

Intel is still better for gaming, no doubt about it. But I'd rather spend my money on what matters more: the GPU. Instead of buying an expensive Intel CPU, I'd rather buy a cheap AMD CPU, enjoy a nice price/performance ratio in non-gaming applications, and put the extra money toward my GPU budget. After all, the Tech Report's gaming benchmarks were all performed at 1080p, and as Tech Report has put it, "gaming at higher resolutions will lessen the differences in performance between Ryzen chips and Intel's seventh-generation Core CPUs if a gamer chooses to play that way."

To me, an Intel CPU has become something like exotic amplifiers or ultra-expensive audio cables. Yes, they may deliver two or three percent better sonic performance, but I'd rather use a "good enough" amplifier and cables, and put the extra money where the difference is most audible: the loudspeakers, that is.

Also, I heartily agree with you, Tetrium. I, too, would love to see the AMD chips get up and running. I love to see good competition, since it benefits me as a consumer.

Yes I know that what Scali said was irrelevant. I've always wondered why he was such an Intel fanboy...but let's be honest, being a fanboy is not a crime anyway 😀

And what's good for us consumers is good for us 😁

I'm pretty sure Intel is better at gaming, but I'm much more of a multitasker myself.

Music, several browsers with loads of tabs, text editing windows with lots of tabs, multiple tools for editing game files, total commander/XVI32, messenger programs, starting the odd game to see if my modifications are working out...heck I'm always short on memory 😵

My Phenom II is still doing fine though, but not for the newest games, which I kinda never play these days anyway. I stick to (slightly) older games, not least because those are cheaper and arguably more fun to play 😀

Whats missing in your collections?
My retro rigs (old topic)
Interesting Vogons threads (links to Vogonswiki)
Report spammers here!

Reply 165 of 279, by Scali

Rank: l33t
Tetrium wrote:

Yes I know that what Scali said was irrelevant. I've always wondered why he was such an Intel fanboy...

Since when is explaining things like Amdahl's law fanboyism?
Not to mention that 🤣 all x86 is crap.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 166 of 279, by Tetrium

Rank: l33t++
Scali wrote:

🤣 all x86 is crap.

I don't agree with you here, I think x86 is actually pretty decent and it does work 😀

But let's wait and see what happens next. Even Intel made some crap decisions in the past, so let's hope AMD won't, for us consumers' sake 😀

Whats missing in your collections?
My retro rigs (old topic)
Interesting Vogons threads (links to Vogonswiki)
Report spammers here!

Reply 167 of 279, by Scali

Rank: l33t
Tetrium wrote:

I don't agree with you here, I think x86 is actually pretty decent and it does work 😀

Only shows what you know I suppose.

Tetrium wrote:

But let's wait and see what happens next. Even Intel made some crap decisions in the past, so let's hope AMD won't, for us consumers' sake 😀

We already know what happens next.
AMD made another CPU that can't match Intel's IPC and performance/watt.
They're still competing with 8-core CPUs against 4-cores. Which has its disadvantages, as discussed already.
The 6-cores will have a better chance. Which probably brings us back to about the same situation as Core2 Quad vs Phenom II X6.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 168 of 279, by Carlos S. M.

Rank: Oldbie
Scali wrote:
Tetrium wrote:

I don't agree with you here, I think x86 is actually pretty decent and it does work 😀

Only shows what you know I suppose.

Tetrium wrote:

But let's wait and see what happens next. Even Intel made some crap decisions in the past, so let's hope AMD won't, for us consumers' sake 😀

We already know what happens next.
AMD made another CPU that can't match Intel's IPC and performance/watt.
They're still competing with 8-core CPUs against 4-cores. Which has its disadvantages, as discussed already.
The 6-cores will have a better chance. Which probably brings us back to about the same situation as Core2 Quad vs Phenom II X6.

tbh, AMD's Ryzen at launch is much better than Bulldozer at launch. Despite Ryzen's shortcomings, it was a really big IPC increase over the last gen, unlike Bulldozer, which was literally a downgrade over K10/Phenom II.

What is your biggest Pentium 4 Collection?
Socket 423/478 Motherboards with Universal AGP Slot
Socket 478 Motherboards with PCI-E Slots
LGA 775 Motherboards with AGP Slots
Experiences and thoughts with Socket 423 systems

Reply 169 of 279, by Tetrium

Rank: l33t++
Scali wrote:

Only shows what you know I suppose.

Maybe I'm just easier to satisfy 😀

Scali wrote:

We already know what happens next.
AMD made another CPU that can't match Intel's IPC and performance/watt.
They're still competing with 8-core CPUs against 4-cores. Which has its disadvantages, as discussed already.
The 6-cores will have a better chance. Which probably brings us back to about the same situation as Core2 Quad vs Phenom II X6.

Except that this time the Core2 Quad and Phenom II X6 are not several years apart 😀
AMD definitely is the underdog here, I don't think there is much debate about that 😀
And let's be realistic here, Intel should have plenty of muscle left to flex 😀

But for us consumers it's better to have Intel flex their muscle than it is for us to flex our muscles having to pay for overpriced components, so let's let Intel do some more work for a little while 😀

Whats missing in your collections?
My retro rigs (old topic)
Interesting Vogons threads (links to Vogonswiki)
Report spammers here!

Reply 170 of 279, by Tetrium

Rank: l33t++
Carlos S. M. wrote:
Scali wrote:
Tetrium wrote:

I don't agree with you here, I think x86 is actually pretty decent and it does work 😀

Only shows what you know I suppose.

Tetrium wrote:

But let's wait and see what happens next. Even Intel made some crap decisions in the past, so let's hope AMD won't, for us consumers' sake 😀

We already know what happens next.
AMD made another CPU that can't match Intel's IPC and performance/watt.
They're still competing with 8-core CPUs against 4-cores. Which has its disadvantages, as discussed already.
The 6-cores will have a better chance. Which probably brings us back to about the same situation as Core2 Quad vs Phenom II X6.

tbh, AMD's Ryzen at launch is much better than Bulldozer at launch. Despite Ryzen's shortcomings, it was a really big IPC increase over the last gen, unlike Bulldozer, which was literally a downgrade over K10/Phenom II.

Frankly, I never really understood what all the hype was about when Bulldozer appeared. It really seemed like a downgrade that I kinda wanted to avoid as it didn't really look very good (it actually seemed like quite a poor idea and poor design and it made me reminisce a bit about Intel's Netburst and how it was supposed to eventually do 10GHz).

I do like the naming scheme AMD uses for its CPU sockets (AM2, AM2+, AM3, AM3+, AM4); it's easier to recognize and thus easier to find second-hand, as unaware sellers tend to simply write down stuff that's on the PCB somewhere and... oh well, everyone here probably knows what I'm talking about 🤣

Whats missing in your collections?
My retro rigs (old topic)
Interesting Vogons threads (links to Vogonswiki)
Report spammers here!

Reply 171 of 279, by ODwilly

Rank: l33t

I love how cheap AM3 stuff is right now.

Main pc: Asus ROG 17. R9 5900HX, RTX 3070m, 16gb ddr4 3200, 1tb NVME.
Retro PC: Soyo P4S Dragon, 3gb ddr 266, 120gb Maxtor, Geforce Fx 5950 Ultra, SB Live! 5.1

Reply 172 of 279, by gdjacobs

Rank: l33t++
Scali wrote:
gdjacobs wrote:

Amdahl's Law presents that hard performance limit. It's a reasonable extrapolation that approaching that limit will be a process of diminishing returns and at some point devs are going to bow to pressure from their production team, take their performance wins in hand and call it a day.

I see no such relation.
Some algorithms are very easy to parallelize/optimize. Others are really hard. This difficulty has no relation to how well they may or may not scale based on Amdahl's law.
Thing is, optimizations change algorithms, and therefore they need to be reevaluated according to Amdahl's Law.
Amdahl's Law only gives you the hard limit for a specific implementation.

Amdahl's Law applies not just to the narrow algorithm but in a practical sense to the complete workload. It's to this that I refer.

Scali wrote:
gdjacobs wrote:

Well, in game physics models for one. Never bank against a developer finding ways to use additional compute resources.

Game physics do not only have dependencies on user input, but also make each physics object dependent on every other physics object. Physics simulations are by nature iterative solutions, and iterative solutions are sequential by default. Each iteration depends on the one before.
Physics is one of those things in modern games that has trouble scaling to many cores.
The PPU was an interesting processor, because it was based on network packet switching technology. It could efficiently forward results from one core to the next, allowing you to quickly solve these dependencies and iterations because you could distribute your workload efficiently.
GPGPUs can do this, but only part of the way. CPUs have too much overhead.

Multi-body kinetics have been solved in parallel for literally decades.

All hail the Great Capacitor Brand Finder

Reply 173 of 279, by Scali

Rank: l33t
gdjacobs wrote:

Amdahl's Law applies not just to the narrow algorithm but in a practical sense to the complete workload. It's to this that I refer.

Same difference.
Workloads are processed by algorithms.
You could either evaluate Amdahl's law for a single algorithm and its workload, or for a compound workload being processed by a series of algorithms.
Doesn't change the fact that as soon as you change anything in your code, you need to re-evaluate its behaviour under Amdahl.
Nor the point that I made: the complexity of implementing/optimizing a given algorithm has no correlation to its parallel scalability whatsoever.

gdjacobs wrote:

Multi-body kinetics have been solved in parallel for literally decades.

Yes, it's pretty obvious you can perform things in parallel to a certain degree. That was a given. Not sure why you bring that up.
The problem is scalability, that's what we've been discussing the whole time.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 174 of 279, by gdjacobs

Rank: l33t++
Scali wrote:

Same difference.
Workloads are processed by algorithms.
You could either evaluate Amdahl's law for a single algorithm and its workload, or for a compound workload being processed by a series of algorithms.
Doesn't change the fact that as soon as you change anything in your code, you need to re-evaluate its behaviour under Amdahl.

I agree with all of this.

Scali wrote:

Nor the point that I made: the complexity of implementing/optimizing a given algorithm has no correlation to its parallel scalability whatsoever.

No, but its parallel scalability and complexity will impact any decision on going ahead with implementation. If you have a limited budget and the approach will be expensive with limited impact in terms of total speedup, it's obviously not worth bothering.

Scali wrote:
gdjacobs wrote:

Multi-body kinetics have been solved in parallel for literally decades.

Yes, it's pretty obvious you can perform things in parallel to a certain degree. That was a given. Not sure why you bring that up.
The problem is scalability, that's what we've been discussing the whole time.

Scalability to 16 processes for this type of problem is small potatoes with the proper comms.

All hail the Great Capacitor Brand Finder

Reply 175 of 279, by Scali

Rank: l33t
gdjacobs wrote:

Scalability to 16 processes for this type of problem is small potatoes with the proper comms.

Well, most games use standardized physics libraries.
The three 'major players' are:
Havok - owned by Intel
PhysX - owned by nVidia
Bullet - supported by Sony

Not exactly small players. And Intel especially has everything to gain from optimizing performance on multi-core machines.
Yet, you will see that the physics in games doesn't scale that well beyond 3-4 cores.
It's not so much that the code isn't optimized, but rather that the scenarios in games don't tend to parallelize that well, because they are rather simplified.
You'd either have to run far more detailed simulations, which cannot run in realtime in the first place, or you'd have to design scenes to explicitly scale well, but then you'd have very synthetic scenes.
Neither are very suitable use-cases for games.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 176 of 279, by PhilsComputerLab

Rank: l33t++
ODwilly wrote:

I love how cheap AM3 stuff is right now.

That's the best part 😁

Though the shops still stock the high-end stuff at full prices, like the FX CPUs or 990FX boards.

YouTube, Facebook, Website

Reply 177 of 279, by gdjacobs

Rank: l33t++
Scali wrote:

It's not so much that the code isn't optimized, but rather that the scenarios in games don't tend to parallelize that well, because they are rather simplified.
You'd either have to run far more detailed simulations, which cannot run in realtime in the first place, or you'd have to design scenes to explicitly scale well, but then you'd have very synthetic scenes.
Neither are very suitable use-cases for games.

Yes, if they're using approximation methods it would certainly complicate the problem.

spiroyster wrote:

'Concurrent' programming tools (I'm assuming this means threading libraries?) have been pretty decent from the off?

I like co-arrays better.

spiroyster wrote:

The problem with physics is that it's difficult to fake without going full-blown FEA on it (or some form of FEA within an accelerated structure like a kd-tree or something), and a lot of FEA is not mutually exclusive in a lot of cases. So while it can be broken down into digestible packets, it can't be done very efficiently without making sacrifices in precision.

Think of a Jenga stack 100 levels high. Remove the bottom one and this has an effect on the top layer, but not directly; it has to process the interaction between all the other ones. The top objects are dependent on all the other objects, and that dependency is what throws a spanner in the works for efficient parallelisation. The reaction/action of one is not mutually exclusive of the other. Of course this all depends on the level of scale at which you want to simulate rigid (or non-rigid) body interaction, i.e. in a game do you really give a shit about the position of a grain of sand 😀

The bottom objects are also dependent on the top objects. Kinetics from above will shape the deformation when the bottom jenga block is removed.

All hail the Great Capacitor Brand Finder

Reply 178 of 279, by Scali

Rank: l33t
gdjacobs wrote:

Yes, if they're using approximation methods it would certainly complicate the problem.

It's not 'approximation methods' in the sense of code (well, it's an iterative solution, so you could argue that it's always an approximation), but rather that they simplify the geometry for physics.
You don't run physics on the actual geometry you see on screen, but rather on a simplified 'shadow scene' that is used only for physics calculations. In this 'shadow scene', the objects may contain fewer polys, or may not be polygon-based at all (spheres, boxes, ragdolls, etc.). Certain objects may have no representation in the 'shadow scene' at all, because they are unaffected by physics.
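
A hypothetical sketch of that split, with every name invented for illustration (no particular engine implied): the renderer sees the full mesh, while the physics step only ever touches a cheap proxy shape.

    #include <vector>

    struct Vec3 { float x, y, z; };

    struct RenderMesh {             // what the GPU draws: potentially thousands of triangles
        std::vector<Vec3> vertices;
    };

    struct CollisionProxy {         // what the physics solver sees: a sphere approximating the mesh
        Vec3 center;
        float radius;
    };

    struct GameObject {
        RenderMesh visual;                 // full-detail geometry
        CollisionProxy physics;            // simplified 'shadow' representation
        bool affected_by_physics = true;   // purely decorative objects can opt out entirely
    };

    // The physics step reads only the proxies, never the render vertices.
    bool overlaps(const CollisionProxy& a, const CollisionProxy& b) {
        float dx = a.center.x - b.center.x;
        float dy = a.center.y - b.center.y;
        float dz = a.center.z - b.center.z;
        float r = a.radius + b.radius;
        return dx * dx + dy * dy + dz * dz <= r * r;
    }

    int main() {
        CollisionProxy a{ {0.0f, 0.0f, 0.0f}, 1.0f };
        CollisionProxy b{ {1.5f, 0.0f, 0.0f}, 1.0f };
        return overlaps(a, b) ? 0 : 1;   // the spheres overlap, so this exits with 0
    }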

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 179 of 279, by spiroyster

Rank: Oldbie
gdjacobs wrote:

Yes, if they're using approximation methods it would certainly complicate the problem.

What do you mean by 'approximation methods'? Interpolation is done between each frame/state, and each frame/state is only broken down into so much detail (time/topology), so it's always going to be an approximation. Approximations can be on the money though. It's not like they are always going to be wrong or slow 😀

gdjacobs wrote:

I like co-arrays better.

coarrays, not heard of them o.0 Fortran eh?

While it certainly looks like an elegant syntax, this is for parallelising numerical methods (similar to OpenMP)? Its parallelisation model duplicates the program for each 'image' and runs them concurrently. This should be used for distributed-memory architectures (like clusters), not shared-memory architectures (like SMP systems, which OpenMP is good for). I can't see how you could spawn a worker thread with different executable instructions; the thread would be spawned with an entire copy of the program's code, and only the 'non-executable' data can differ.

gdjacobs wrote:
spiroyster wrote:

The problem with physics is that it's difficult to fake without going full-blown FEA on it (or some form of FEA within an accelerated structure like a kd-tree or something), and a lot of FEA is not mutually exclusive in a lot of cases. So while it can be broken down into digestible packets, it can't be done very efficiently without making sacrifices in precision.

Think of a Jenga stack 100 levels high. Remove the bottom one and this has an effect on the top layer, but not directly; it has to process the interaction between all the other ones. The top objects are dependent on all the other objects, and that dependency is what throws a spanner in the works for efficient parallelisation. The reaction/action of one is not mutually exclusive of the other. Of course this all depends on the level of scale at which you want to simulate rigid (or non-rigid) body interaction, i.e. in a game do you really give a shit about the position of a grain of sand 😀

The bottom objects are also dependent on the top objects. Kinetics from above will shape the deformation when the bottom jenga block is removed.

Indeed, dependencies all round, little room for parallelisation. That was my point.