VOGONS

Retro CG/CGI discussion group?

Reply 20 of 38, by spiroyster

Rank Oldbie
Scali wrote:
vvbee wrote:
spiroyster wrote:

Path tracing is new though, not retro. Majority of path tracers all look the same. Retro imo means no path tracing, no monte carlo... forward tracing only 🤣

It's from the 80s though, and you can even look through old papers for a fix of retro path tracing.

The technique is old (as I said, pretty much everything was invented in the 70s and 80s), but it was not applied as a rendering technique on PCs and home computers in the 80s/90s.

Yes, this. The rendering equation is from the 80's, but I don't think this included what we today call path tracing. Radiosity was originally used before 'path tracing' to simulate GI (which is biased: finite form factors, finite patching), and is technically 'tracing linear paths between patches' in a refining-pass manner, and then later (mid 90's) came photon mapping. At some point, somebody applied Monte Carlo to get a better approximation to the rendering equation, which suddenly allowed convergence on a near infinite number of path possibilities and then unbiased 'path tracing' was born.. BDPT, MLT, Russian roulette all followed. Idk, but I don't think this was around in the 90's. Even the commercial photo realistic renderers used biased methods until at least the 00's (photon mapping in many cases). This is apparent from the lack of unfinished renders with Monte Carlo noise; rather it would be stuff like low-res shadows (radiosity without enough passes/low density patches), occluded geometry, incorrect tiling bugs etc. The only exceptions being the use of volumetric casting for particles, which has been used for a while to get those kinda effects.

In the industry, speed was important given the mass of calculations required for a batch of frames, and biased methods produced enough realism while compromising physical accuracy. Most CGI artists don't want physical realism, they want photorealism, and many biased methods were more than adequate for simulated GI. It could be argued that we knew all along what to do (rendering equation) but the methods, algorithms and tools available came about because of limitations of the hardware and end requirements at the time. Retro methods!

Ironically your gameboy implementation (tres cool by the way!) could potentially be period correct hardware 🤣, but path tracing on DOS? That is 'retro fitting' imo.... (again tres cool by the way!!)

Reply 21 of 38, by Scali

Rank l33t
spiroyster wrote:

and then later (mid 90's) came photon mapping. At some point, somebody applied Monte Carlo to get a better approximation to the rendering equation, which suddenly allowed convergence on a near infinite number of path possibilities and then unbiased 'path tracing' was born..

I did a photon mapping ray tracer for my Masters at uni back in the early 2000s. And as far as I recall, Monte Carlo techniques came first, and photon mapping was a better alternative, because it could be made more efficient, and the results were easier to filter and avoid noise or other aliasing.
Actually I'm quite surprised that even today a lot of people are using Monte Carlo techniques, and various raytracers still produce very noisy images.
Also, where exactly is the line between 'path tracing' and 'photon mapping'?
I mean, the 'ideal' is to have bidirectional tracing. Photon mapping, strictly speaking, is only tracing rays from the light sources... However, in practice, photon mappers are always bi-directional, and use the photon-mapping pass for indirect lighting, while doing an eye-trace pass for the direct lighting. So, it's bi-directional.
And with photon mapping, you can still use the exact same equations 'both ways', so I'm not sure where the line between biased and unbiased would be either.
Edit: This site gives an explanation:
https://web.archive.org/web/20120607035534/ht … as_in_rendering

So they seem to have a problem with the filtering done on the photon maps. I suppose you could be bothered by filtering, but that is a very impractical way of looking at things.
Besides, I suppose the filtering is the main difference with bidirectional tracing anyway, and filtering in photon mapping is technically optional.


Reply 22 of 38, by spiroyster

Rank Oldbie
Scali wrote:

I did a photon mapping ray tracer for my Masters at uni back in the early 2000s. And as far as I recall, Monte Carlo techniques came first, and photon mapping was a better alternative, because it could be made more efficient, and the results were easier to filter and avoid noise or other aliasing.

You might be right, Monte Carlo was, after all, used to model nukes in the 40's/50's. I don't remember it being applied in a practical way to photo realistic rendering though until after photon mapping? I was quite into raytracing about then (on a forum called ompf that spawned out of flipcode). I don't remember Monte Carlo being a 'thing' until a few years later (maybe some papers, but it would have been pure academia then). Photon mapping and refinements like progressive radiosity were all the rage (of course that elusive real-time raytracing was 'the' big thing). My recollection could be wrong of course.

Scali wrote:

Actually I'm quite surprised that even today a lot of people are using Monte Carlo techniques, and various raytracers still produce very noisy images.

Photon mapping is still very popular too 😀. Monte Carlo will converge, and quite quickly for most intents and purposes, but certain areas (in long thin tunnels etc.) will take longer to get that GI result than photon mapping... especially at the other end of the tunnel to the observer. Where Monte Carlo excels is with complex material models: since the sampling is done stochastically, it can converge on a complex, many-layered material BxDF a lot quicker, and with greater accuracy, than taking a finite set of samples in the first place.

Scali wrote:

Also, where exactly is the line between 'path tracing' and 'photon mapping'?

Yes, good point. Since a photon map is essentially the result of modelled paths, both perform path tracing, with photon mapping being an optimisation which introduces bias.... bias isn't a bad thing though 😀.

Photon mapping has two distinct passes... firing the photons and storing them (the map), then evaluating which paths are visible using an eye cast and querying/sampling the map in a spatial fashion around the visible intersection point. Both passes can be done mutually exclusively of each other (especially if you store the eye casts in a depth map), i.e. rays cast from the viewport query the map and are not recast towards a light source, ergo only a single pass and a single intersection is required for each pixel (or sample per pixel). So rasterizing is an obvious optimisation for the visibility, getting the world coordinate of the first intersection and then using this in the query of the pre-baked photon map. Essentially combining two distinct data sets into another (the final image).
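
To make the first pass concrete, here's a rough toy sketch in C++ (not any particular renderer; the one-plane 'scene' and the equal power split per photon are assumptions for illustration) of firing photons from a point light and storing the hits in a map:

#include <cstdlib>
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };
struct Photon { Vec3 position; Vec3 direction; Vec3 power; };

static double rnd() { return rand() / (double)RAND_MAX; }

// Stand-in scene: a single floor plane at y = 0. A real renderer would
// intersect proper geometry here.
static bool traceRay(const Vec3& o, const Vec3& d, Vec3& hit)
{
    if (d.y >= 0.0) return false;                 // travelling away from the floor
    double t = -o.y / d.y;
    if (t <= 0.0) return false;
    hit = { o.x + t * d.x, 0.0, o.z + t * d.z };
    return true;
}

// Pass 1: fire photons from a point light and store every hit in the map.
// Each stored photon carries an equal share of the light's power, which is
// exactly the finite 'resolution' of the map discussed here.
static std::vector<Photon> buildPhotonMap(const Vec3& lightPos, const Vec3& lightPower, int numPhotons)
{
    std::vector<Photon> map;
    for (int i = 0; i < numPhotons; ++i)
    {
        // Random direction on the unit sphere via rejection sampling.
        Vec3 d;
        double len2;
        do {
            d = { 2 * rnd() - 1, 2 * rnd() - 1, 2 * rnd() - 1 };
            len2 = d.x * d.x + d.y * d.y + d.z * d.z;
        } while (len2 > 1.0 || len2 == 0.0);
        double len = std::sqrt(len2);
        d = { d.x / len, d.y / len, d.z / len };

        Vec3 hit;
        if (traceRay(lightPos, d, hit))
        {
            Vec3 share = { lightPower.x / numPhotons, lightPower.y / numPhotons, lightPower.z / numPhotons };
            map.push_back({ hit, d, share });
        }
        // A fuller implementation would also bounce the photon on (Russian
        // roulette) and store it again at each diffuse hit; omitted here.
    }
    return map;
}

The eye pass would then query this map spatially around each visible intersection, as described above.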

Path tracing doesn't explicitly use maps, instead firing the photons and then waiting to see which ones (after bouncing around) make it through the viewport rather than storing them in a spatial map (very inefficient, since most will die without ever going near the viewport). BDPT aims to optimise this by firing a ray to the viewport (to check for occlusion) after each bounce, so some result can be accumulated for that pixel, while still allowing the 'path' to continue its bounce and course (could be transmission, not just reflection). This means many more contributions can be made for each path from a light source: at the end of the day we are trying to model ALL paths, so there will be paths from the viewport to each intersection point (occlusion taken into account, of course), and rather than wait for those paths to pop up at random over time, we use the contribution from the intersection point back along its path to the source, since we already have that calculation from this 'path trace'.
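
And a very stripped-down sketch of that 'connect every bounce back to the eye' idea (again toy C++: a one-plane scene, a crude 1-D 'framebuffer', and the proper camera weighting omitted entirely; the control flow is the point here, not the radiometry):

#include <cstdlib>
#include <vector>

struct Vec3 { double x, y, z; };

static double rnd() { return rand() / (double)RAND_MAX; }

// Toy scene: a single diffuse floor plane at y = 0, nothing else, so the
// visibility test towards the eye is trivially 'unoccluded'.
static bool hitFloor(const Vec3& o, const Vec3& d, Vec3& hit)
{
    if (d.y >= 0.0) return false;
    double t = -o.y / d.y;
    if (t <= 0.0) return false;
    hit = { o.x + t * d.x, 0.0, o.z + t * d.z };
    return true;
}

// Splat a contribution towards the eye into a crude 100-pixel 'image'.
// A real tracer would project through a proper camera model and apply the
// correct importance/pixel-filter weighting here.
static void connectToEye(std::vector<double>& image, const Vec3& vertex, double energy)
{
    int px = (int)((vertex.x + 10.0) / 20.0 * image.size());
    if (px >= 0 && px < (int)image.size()) image[px] += energy;
}

// Trace one light path; after *every* bounce, immediately connect the new
// vertex back to the eye instead of waiting for the path to find the
// viewport by itself, then let the path carry on bouncing.
static void traceLightPath(std::vector<double>& image, Vec3 origin, Vec3 dir, double energy)
{
    for (int bounce = 0; bounce < 8; ++bounce)
    {
        Vec3 vertex;
        if (!hitFloor(origin, dir, vertex)) return;       // path left the scene

        connectToEye(image, vertex, energy);              // accumulate now, don't wait

        // Russian roulette continuation with unbiased weighting.
        const double survive = 0.5, albedo = 0.5;
        if (rnd() > survive) return;
        energy *= albedo / survive;

        origin = vertex;
        dir = { rnd() - 0.5, rnd(), rnd() - 0.5 };        // crude bounce back upwards
    }
}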

Photon mapping is thus essentially an approximation built on top of a finite modelled photon map, hence it being called biased because the accuracy of the map depends on the number of samples used to create it. It's not bi-directional because in both cases (photons, and eye rays), the rays only go from source to end. Either from light to end of photon life (multiple bounces) or from eye to visible surface (single bounce), but the 'combining' happens in a separate stage.

Scali wrote:

I mean, the 'ideal' is to have bidirectional tracing. Photon mapping, strictly speaking, is only tracing rays from the light sources... However, in practice, photon mappers are always bi-directional, and use the photon-mapping pass for indirect lighting, while doing an eye-trace pass for the direct lighting. So, it's bi-directional.

In BDPT, each light path not only goes from source to end, but at each intersection there is another path generated (going towards the viewport), hence the bi-directional aspect, i.e. each path's contributions are calculated in both directions per path. With photon mapping the eye paths and light paths are mutually exclusive, only being combined afterwards.

Scali wrote:

And with photon mapping, you can still use the exact same equations 'both ways', so I'm not sure where the line between biased and unbiased would be either.

Biased as in 'mathematical bias': with a photon map it's computed using a finite number of photons in a distinct pass, so the accuracy is always limited by the number of samples that make up the map. If you keep going you will reach a point of convergence, but that itself is dictated by the precision of the map (at some point you won't get a more accurate result by refining because you have already refined beyond the precision of the map). With Monte Carlo (unbiased), if you keep going you will converge, but you stop when it's 'good enough'; unlike photon mapping, you could continue to get an even more 'accurate' result because there is no limitation on the light contributions you are using to model that path's contribution (not predefined by a discrete map of values, instead modelled stochastically from the material equation/BxDF's involved). It may not be perceivably more accurate than the previous result, but mathematically it is, since it doesn't come from a finite population (the photon map) and is instead calculated at that iteration.

As I say, bias isn't bad. Mathematical bias can be loosely attributed to overall error, in which case if the error is small enough (a biased render), for all intents and purposes it's still enough for the observer and appears to be accurate, with probably little variation (a really small value in precision), so for a human it could look exactly like an unbiased render. Essentially all Monte Carlo images have elements of error (because none are left for an infinite amount of time), but the perceived error is always 'small enough' and the bias is not from the data to be modelled, rather from the length of time the process/render takes to model it.

Reply 23 of 38, by Scali

Rank l33t
spiroyster wrote:

You might be right, Monte Carlo was, after all, used to model nukes in the 40's/50's. I don't remember it being applied in a practical way to photo realistic rendering though until after photon mapping? I was quite into raytracing about then (on a forum called ompf that spawned out of flipcode). I don't remember Monte Carlo being a 'thing' until a few years later (maybe some papers, but it would have been pure academia then). Photon mapping and refinements like progressive radiosity were all the rage (of course that elusive real-time raytracing was 'the' big thing). My recollection could be wrong of course.

Well, for the research we did with photon mapping, we concentrated on rendering caustics through reflection and refraction. We based it on some papers of Henrik Wann Jensen, and iirc Jensen himself also compares with some Monte Carlo renderers, and points out the noise issues with Monte Carlo.

Technically speaking however, 'Monte Carlo' is not a rendering technique, it is basically just a type of fuzzy logic of the form of:

if (rand() / (double)RAND_MAX < threshold)
    ChoosePathA();
else
    ChoosePathB();

Where you can vary 'threshold' to get a certain distribution of paths A and B... For example, if you have a surface that is 20% reflective and 80% refractive, you could take a threshold of 0.2 and have path A be the reflective path, and B be the refractive path.
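
As a tiny hypothetical sketch of that 20%/80% example (the trace functions are just stand-ins): because the branch probability matches the surface coefficients, the chosen branch's result is used as-is, and the 0.2/0.8 split emerges from the selection itself over many samples, which is what keeps the estimate unbiased.

#include <cstdlib>

struct Colour { double r, g, b; };

// Trivial stand-ins so the sketch is self-contained; a real tracer would
// recurse into the scene from here.
static Colour traceReflection() { return { 0.9, 0.9, 0.9 }; }
static Colour traceRefraction() { return { 0.2, 0.4, 0.8 }; }

static double rnd() { return rand() / (double)RAND_MAX; }

// Surface that is 20% reflective / 80% refractive. Since the selection
// probability equals each coefficient (they sum to 1), no extra weighting
// is needed: E[result] = 0.2 * reflection + 0.8 * refraction.
static Colour shadeSurface()
{
    const double kReflect = 0.2;
    return (rnd() < kReflect) ? traceReflection() : traceRefraction();
}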

And if you look at it like that, strictly, then photon mapping also applies Monte Carlo.
However, at least back in the day, Monte Carlo appeared to refer more specifically to eye-ray renderers that would simulate diffusion at each intersection by not just reflecting or refracting the single ray, but actually generating a (randomly distributed) 'bundle' of rays.
It was a way to approximate caustics from an eye-trace, but it was very slow and very noisy.
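
For illustration only (not the renderers being described), a minimal sketch of generating such a randomly distributed 'bundle' over the hemisphere around the surface normal; cosine weighting is one common choice, and the helper names are made up:

#include <cmath>
#include <cstdlib>
#include <vector>

struct Vec3 { double x, y, z; };

static double rnd() { return rand() / (double)RAND_MAX; }

// Build an orthonormal basis (t, b, n) around the unit surface normal n.
static void makeBasis(const Vec3& n, Vec3& t, Vec3& b)
{
    t = std::fabs(n.x) > 0.5 ? Vec3{ n.z, 0.0, -n.x } : Vec3{ 0.0, -n.z, n.y };
    double len = std::sqrt(t.x * t.x + t.y * t.y + t.z * t.z);
    t = { t.x / len, t.y / len, t.z / len };
    b = { n.y * t.z - n.z * t.y, n.z * t.x - n.x * t.z, n.x * t.y - n.y * t.x };
}

// Generate a 'bundle' of cosine-weighted directions around n. Each eye-ray
// hit spawns such a bundle, which is exactly why this style of renderer gets
// slow and noisy when the bundle is small.
static std::vector<Vec3> diffuseBundle(const Vec3& n, int count)
{
    const double kPi = 3.14159265358979323846;
    Vec3 t, b;
    makeBasis(n, t, b);
    std::vector<Vec3> dirs;
    for (int i = 0; i < count; ++i)
    {
        double u = rnd(), v = rnd();
        double r = std::sqrt(u), phi = 2.0 * kPi * v;      // Malley's method: sample a disc,
        double x = r * std::cos(phi), y = r * std::sin(phi);
        double z = std::sqrt(1.0 - u);                      // then project up to the hemisphere
        dirs.push_back({ x * t.x + y * b.x + z * n.x,       // rotate into world space
                         x * t.y + y * b.y + z * n.y,
                         x * t.z + y * b.z + z * n.z });
    }
    return dirs;
}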

spiroyster wrote:

So rasterizing is an obvious optimisation for the visibility, getting the world coordinate of the first intersection and then using this in the query of the pre-baked photon map. Essentially combining two distinct data sets into another (the final image).

Yes, but don't most 'classic' offline renderers also use a special 'first bounce'? Last time I looked, 3dsmax still used a rasterizer for the initial scene, and only fired rays out of the rasterized pixels.

spiroyster wrote:

Photon mapping is thus essentially an approximation built on top of a finite modelled photon map, hence it being called biased because the accuracy of the map depends on the number of samples used to create it. It's not bi-directional because in both cases (photons, and eye rays), the rays only go from source to end. Either from light to end of photon life (multiple bounces) or from eye to visible surface (single bounce), but the 'combining' happens in a separate stage.

Well, any kind of raytracing is an approximation, since they all depend on a limited number of samples. Whether you call these samples rays or photons doesn't really change anything.
Also, any rendering equation is a combination of various terms, so as long as you are evaluating the same equation, it shouldn't matter whether different terms come from different stages or not, as long as they yield the correct (or in practice a good enough approximation) value.

spiroyster wrote:

In BDPT, each light path not only goes from source to end, but at each intersection there is another path generated (going towards the viewport), hence the bi-directional aspect, i.e. each path's contributions are calculated in both directions per path. With photon mapping the eye paths and light paths are mutually exclusive, only being combined afterwards.

Yes, but again, what is the difference really?
It's like saying deferred rendering does not yield the same results as forward rendering, while in practice you can make both perform the exact same rendering equation if you want.

spiroyster wrote:

Biased as in 'mathematical bias': with a photon map it's computed using a finite number of photons in a distinct pass, so the accuracy is always limited by the number of samples that make up the map. If you keep going you will reach a point of convergence, but that itself is dictated by the precision of the map (at some point you won't get a more accurate result by refining because you have already refined beyond the precision of the map). With Monte Carlo (unbiased), if you keep going you will converge, but you stop when it's 'good enough'; unlike photon mapping, you could continue to get an even more 'accurate' result because there is no limitation on the light contributions you are using to model that path's contribution (not predefined by a discrete map of values, instead modelled stochastically from the material equation/BxDF's involved). It may not be perceivably more accurate than the previous result, but mathematically it is, since it doesn't come from a finite population (the photon map) and is instead calculated at that iteration.

Again, you'll have to explain this to me.
Why wouldn't photon mapping converge, and why would Monte Carlo converge? In both cases it's just the result of how many samples you take, and how well your random distribution of these samples is chosen for the specific scenario, right?
Theoretically a photon map is infinite, you can fire infinite photons if you want.
Same goes for any other method... yes you can fire infinite rays, but in practice you don't.
So I don't understand what difference there would be.

Edit:
Perhaps this document has the answer: https://graphics.stanford.edu/courses/cs348b- … h-chapter10.pdf
See Figure 10.2.
It describes bidirectional path tracing as a combination of different special-cases. And case (c) is the case of photon mapping as described by HWJ.
So I suppose there is no difference... or at least, bidirectional tracing can be seen as a superset of photon mapping (and Monte Carlo for that matter).


Reply 24 of 38, by vvbee

Rank Oldbie
spiroyster wrote:

It could be argued that we knew all along what to do (rendering equation) but the methods, algorithms and tools available came about because of limitations of the hardware and end requirements at the time. Retro methods!

Tools were available either as given to you by a second instance or made for you by yourself, and certainly there are people who don't mind waiting for weeks for a rendering to be complete (all you need is time if you have a cpu and a program that executes the proper algorithm on a dataset). I'd personally not want to turn 'period correct' into 'bleeding average' and would see it as proper retro to look for path tracing solutions fitting early hardware and software not to mention proper scene design.

I implemented a volumetric photon mapper combined with a path tracer to render clouds once. Good results but that one you wouldn't have computed on old hardware in a sane amount of time.

Reply 25 of 38, by spiroyster

Rank Oldbie
Scali wrote:

Well, for the research we did with photon mapping, we concentrated on rendering caustics through reflection and refraction. We based it on some papers of Henrik Wann Jensen, and iirc Jensen himself also compares with some Monte Carlo renderers, and points out the noise issues with Monte Carlo.

Interesting, so perhaps it was considered a method for modelling light long before I thought. It is certainly one of the slowest, which perhaps contributed to it only being used later. Monte Carlo is embarrassingly parallel and really took off with GPGPU. Prior to that you needed some heavy SMP stuff.... or a Pentium 3 and a couple of nights o.0. Maybe this all led to it not being widely adopted until later.

And yep, Jensen is THE man for Photon Mapping 😀.

Scali wrote:

Technically speaking however, 'Monte Carlo' is not a rendering technique...

Ok... It's an approximation method o.0... which is used in (raytraced) rendering. Or put another way... a method to approximate the near infinite paths of light in a given discretely defined scene and thus 'render' an image.

Like I said it was famously used in the Manhattan project back in the 40's. Probably before that.

Scali wrote:

it is basically just a type of fuzzy logic of the form of:

if (rand() / (double)RAND_MAX < threshold)
    ChoosePathA();
else
    ChoosePathB();

Where you can vary 'threshold' to get a certain distribution of paths A and B... For example, if you have a surface that is 20% reflective and 80% refractive, you could take a threshold of 0.2 and have path A be the reflective path, and B be the refractive path.

Yes, this is one way to look at it. rand() being the keyword for it being Monte Carlo 😀. Monte Carlo only comes to life when the number of paths/possibilities is near infinite (or very large) and your PRNG is very random.... otherwise with a low sample count it can give some truly 'random' results and it's probably better to use other methods.

Ye olde example usually given to describe how Monte Carlo works is approximating PI (see wiki). In that example we take a set number of random samples (each an x,y point) and simply record whether each one lands within the circle or not. Dividing the hits by the total number of samples gives an approximation of the ratio of the circle's area to the square's area (PI/4 for a quarter circle in a unit square), from which PI follows. More samples means a closer approximation to the actual result, but each time we don't have to start again... we simply keep sampling as we see fit.
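
A minimal sketch of that classic experiment (quarter circle inside a unit square, so the hit ratio approaches PI/4), showing that the estimate can keep being refined without ever restarting:

#include <cstdio>
#include <cstdlib>

int main()
{
    long inside = 0, total = 0;
    for (int pass = 0; pass < 4; ++pass)               // keep sampling 'as we see fit'
    {
        for (int i = 0; i < 1000000; ++i)
        {
            double x = rand() / (double)RAND_MAX;      // random point in the unit square
            double y = rand() / (double)RAND_MAX;
            if (x * x + y * y <= 1.0) ++inside;        // does it land inside the quarter circle?
            ++total;
        }
        // The estimate improves with every batch; earlier samples are never wasted.
        std::printf("after %ld samples: pi ~= %f\n", total, 4.0 * inside / total);
    }
    return 0;
}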

This idea can be extended to all kinds of shapes and volumes (and in fact systems)... granted, in these examples it's better to use numerical methods, but the principle holds. Other problems for which numerical methods are not satisfactory can perhaps be better approximated using Monte Carlo methods. Approximating ALL possible light paths in a scene is one such application.

Scali wrote:

And if you look at it like that, strictly, then photon mapping also applies Monte Carlo.

You can use Monte Carlo during the baking process. But once baked it is a finite number of paths in the map, and so any results sampled from this map essentially make it biased. If you were to take each photon and calculate its subsequent eye/visibility, using this in the final image, then yes, it could be argued that it is a 'Monte Carlo' unbiased render, because you are using the original 'Monte Carlo' approximated values/path results in their entirety. No extra paths calculated, no sampling of precalced results. The 'map' becomes nothing more than a container for storage.... containing a shite load of photon paths.... and I mean a shite load!

For photon mapping, the point of intersection for the eye path may not (probably will not) directly correlate to a photon (only if the photon and eye intersection points coexist), so instead a sphere of influence spatially samples the photons from the map. Because there is only a finite number of photons in the map (let alone within your sphere of influence), you are sampling from a restricted domain (only the photons you previously modelled/baked... Monte Carlo or not), so it becomes biased.
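
A rough sketch of that gather step (a naive linear search over the map; a real implementation would use a kd-tree, and the names here are placeholders): collect the photons inside the sphere of influence and divide their power by the disc area. The estimate can only ever be as good as the finite set of baked photons.

#include <vector>

struct Vec3 { double x, y, z; };
struct Photon { Vec3 position; Vec3 power; };   // incoming direction omitted for brevity

// Gather every photon within 'radius' of the eye-ray hit point and estimate
// the irradiance there as (collected power) / (pi * r^2).
static Vec3 irradianceEstimate(const std::vector<Photon>& map, const Vec3& hit, double radius)
{
    Vec3 sum = { 0.0, 0.0, 0.0 };
    for (const Photon& p : map)
    {
        double dx = p.position.x - hit.x;
        double dy = p.position.y - hit.y;
        double dz = p.position.z - hit.z;
        if (dx * dx + dy * dy + dz * dz <= radius * radius)
        {
            sum.x += p.power.x; sum.y += p.power.y; sum.z += p.power.z;
        }
    }
    const double kPi = 3.14159265358979323846;
    double area = kPi * radius * radius;
    return { sum.x / area, sum.y / area, sum.z / area };
}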

With BDPT, each bounce samples randomly from the entire domain (no lookup of a precalced value from a map) and applies the correct probability distribution. Since it doesn't sample from a finite set of paths (instead it calculates a new one/many and weights them according to the distribution), it is thus considered unbiased.

You cannot recalculate a photon map mid render (so you are stuck sampling the 'baked' paths) without potentially invalidating previous casts (since they were sampled from the map before its content changed).

Of course... if it LGTY... then it's good enough, but if you want physical accuracy, it might not be 'accurate' enough.

Scali wrote:

Yes, but don't most 'classic' offline renderers also use a special 'first bounce'? Last time I looked, 3dsmax still used a rasterizer for the initial scene, and only fired rays out of the rasterized pixels.

It wouldn't surprise me, it's an obvious optimisation. Traditionally offline renderers would have been assumed to run on large clusters (render farms). These may or may not have enough hardware-accelerated 'fill rate' to warrant use, but then again who knows. Of course, for one-man-band rendering at home, they are likely to have spare raster hardware to perhaps help out. Radiosity could even be entirely accelerated using rasterization, rendering a frame in the position of each patch and then sampling that.

Scali wrote:

Well, any kind of raytracing is an approximation, since they all depend on a limited number of samples. Whether you call these samples rays or photons doesn't really change anything.

Of course, didn't mean to suggest path tracing isn't an approximation o.0. It's all essentially modelling visible light propagation.

Scali wrote:

Also, any rendering equation is a combination of various terms, so as long as you are evaluating the same equation, it shouldn't matter whether different terms come from different stages or not, as long as they yield the correct (or in practice a good enough approximation) value.

Not following you here? No, it doesn't matter where the terms come from, but error propagation can occur. Yes, you can mix and match, but only for the final 'combining' stage. If you were to then use this result in a subsequent iteration, its error (if any) might amplify over time. With knowledge of the correct probability distribution, you can weight it accordingly.

Scali wrote:

Yes, but again, what is the difference really?
It's like saying deferred rendering does not yield the same results as forward rendering, while in practice you can make both perform the exact same rendering equation if you want.

Visually, to the human eye, there can be no difference. It's the means by which it gets there. Both methods have their pros and cons. And in some cases this is dependent on the type of scene/geometry you are rendering.

Scali wrote:

Again, you'll have to explain this to me.
Why wouldn't photon mapping converge, and why would Monte Carlo converge? In both cases it's just the result of how many samples you take, and how well your random distribution of these samples is chosen for the specific scenario, right?

Agree, both would 'converge', but since the photon map is predefined and not calculated on a per-path basis it is discrete and so only has a certain precision. High density photon maps would no doubt converge to an acceptable result a lot faster than Monte Carlo, but at some point the resolution of the photon map becomes a problem (many more eye samples taken from a small number of path results in the map). At this point Monte Carlo won't suffer, because it doesn't have a 'resolution' of the samples it can take; instead it takes a random sample correctly weighted for the distribution.

Scali wrote:

Theoretically a photon map is infinite, you can fire infinite photons if you want.

No, a photon map is finite, with discrete precalculated photon paths. There are infinite possible paths that can be calculated, but only a finite number are stored in the map.
Obviously, a larger resolution means more possible paths can be stored in it, and means a better 'approximation'. You can of course mix the two (sample a photon map, and then cast more rays to calculate more paths after an intersection), but you need to adjust the weighting of the samples used accordingly (to retain physical correctness). As I said above, you cannot recalculate a photon map mid render without potentially invalidating previous casts. So it is finite, and thus immutable per frame... good for static geometry, not so much for dynamic, because the photon map changes when the geometry changes (needs rebaking)... and of course it adds 'mathematical bias'.

Scali wrote:

Same goes for any other method... yes you can fire infinite rays, but in practice you don't.
So I don't understand what difference there would be.

And that's where Monte Carlo comes in... as I said above, it becomes useful when there are a large number of possibilities (infinite paths) and a large number of samples (say 5000 samples per pixel). Instead of calculating infinite/ALL paths (not possible), we can do one of many things:

* Calculate a large, set number of paths up front and then spatially sample them to get an average within a sphere of influence around the visible eye path's intersection (photon mapping).

* Or keep iteratively calculating subsequent bounced/refracted paths as they are evaluated through a BxDF... casting from the eye (forward, aka ray tracing), the light (backward, aka path tracing) or both (bi-directional) for each path traced during the render... no predefined caching.

Photon mapping doesn't do this for each path trace; instead it samples one or more predefined photon maps. Great for caustics (which would take path tracing an age to get right), but not so much for specular highlighting, since the latter is view dependent... the view direction is not something stored in a photon map, so it needs to be calculated on the fly... something which BDPT is not bothered about since nothing is precached. Yes, you can sample a photon map at various points in addition to the other calculations going on for a path... but use of the map makes it biased.

Reply 26 of 38, by spiroyster

Rank Oldbie
vvbee wrote:
spiroyster wrote:

It could be argued that we knew all along what to do (rendering equation) but the methods, algorithms and tools available came about because of limitations of the hardware and end requirements at the time. Retro methods!

Tools were available either as given to you by a second instance or made for you by yourself, and certainly there are people who don't mind waiting for weeks for a rendering to be complete (all you need is time if you have a cpu and a program that executes the proper algorithm on a dataset). I'd personally not want to turn 'period correct' into 'bleeding average' and would see it as proper retro to look for path tracing solutions fitting early hardware and software not to mention proper scene design.

I implemented a volumetric photon mapper combined with a path tracer to render clouds once. Good results but that one you wouldn't have computed on old hardware in a sane amount of time.

Of course, all of this stuff was fairly slow back then. I guess it depends if you define Retro CGI as modern-day CGI on retro hardware, or retro CGI on retro/modern hardware... I was thinking the latter personally, but yes there is something special about getting old hardware (like a GBA... or an abacus o.0) to number-crunch modern algorithms.

Reply 27 of 38, by Scali

Rank l33t
spiroyster wrote:
Scali wrote:

Well, for the research we did with photon mapping, we concentrated on rendering caustics through reflection and refraction. We based it on some papers of Henrik Wann Jensen, and iirc Jensen himself also compares with some Monte Carlo renderers, and points out the noise issues with Monte Carlo.

Interesting, so perhaps it was considered a method for modelling light long before I thought. It is certainly one of the slowest, which perhaps contributed to it only being used later. Monte Carlo is embarrassingly parallel and really took off with GPGPU. Prior to that you needed some heavy SMP stuff.... or a Pentium 3 and a couple of nights o.0. Maybe this all led to it not being widely adopted until later.

And yep, Jensen is THE man for Photon Mapping 😀.

This is one of the papers we used back in the day: http://graphics.ucsd.edu/~henrik/papers/photo … maps_egwr96.pdf
It's from 1996, and already mentions Monte Carlo techniques for global illumination. And as he says, MC is slow and noisy. Probably why MC always had a negative connotation with me.

spiroyster wrote:

With BDPT, each bounce samples randomly from the entire domain (no lookup of a precalced value from a map) and applies the correct probability distribution. Since it doesn't sample from a finite set of paths (instead it calculates a new one/many and weights them according to the distribution), it is thus considered unbiased.

But... Aren't you then biasing every single sample by some arbitrary metric? You could argue that photon mapping is theoretically 'correct'... Because light sources actually do emit photons, and the amount of photons they emit *is* limited.

So a photonmap is consistent in the sense that the approximation factor of the amount of photons is the same throughout the scene, rather than incremented/decremented based on <insert random metric here>.

spiroyster wrote:

It wouldn't surprise me, it's an obvious optimisation. Traditionally offline renderers would have been assumed to run on large clusters (render farms). These may or may not have enough hardware-accelerated 'fill rate' to warrant use, but then again who knows. Of course, for one-man-band rendering at home, they are likely to have spare raster hardware to perhaps help out. Radiosity could even be entirely accelerated using rasterization, rendering a frame in the position of each patch and then sampling that.

Why hardware acceleration?
I don't think 3dsmax was ever hardware-accelerated. Raytracing is just stupidly inefficient, and rasterizing is incredibly efficient, even a pure software implementation. So it always was a good trade-off. One major advantage of rasterizing is that you can get it very stable, and with all-integer solutions. No 'holes' in your models, no 'seams' between polygons or anything. And because you are already interpolating gradients over a perspective plane, it's relatively simple to implement things like texture filtering taking the anisotropy into account.

Scali wrote:

Well, any kind of raytracing is an approximation, since they all depend on a limited number of samples. Whether you call these samples rays or photons doesn't really change anything.

Of course, didn't mean to suggest path tracing isn't an approximation o.0. It's all essentially modelling visible light propagation.

spiroyster wrote:

Agree, both would 'converge', but since the photon map is predefined and not calculated on a per-path basis it is discrete and so only has a certain precision.

I would argue that this is physically correct. Light doesn't change precision based on where your eye is in the scene, or anything like that.
Monte Carlo sounds like more of a 'hack'. Sorta like how many raytracers lack robustness and precision in their calculations, so people just bruteforce it with more AA to blend out the wrong pixels.
Practical, yes... And in some ways perhaps 'mathematically correct'. In other ways certainly not.

spiroyster wrote:

No, a photon map is finite, with discrete precalculated photon paths. There are infinite possible paths that can be calculated, but only a finite number are stored in the map.

Sure, the resolution is finite, limited by the resolution you choose for your calculations. Then again, that's the same for any other calculation done on a digital processor.


Reply 28 of 38, by subhuman@xgtx

Rank Oldbie

Chaps, do any of you happen to have a faint idea of what program was used to render those lovely specular shiny ass pieces of fine cgi art? (circa early to mid 1995.)

Early 3DStudio/PoVRay?

[attached images: ybkTXHel.jpg, rl3L9U5.jpg, yW78h5ql.jpg]


Reply 29 of 38, by snorg

Rank Oldbie

It's really hard to say what application/renderer was used on those. Could be just about any one that was commonly available at that time. What time period are we talking about? If we are talking late 80s or early 90s, my money is on Povray or one of the early Amiga programs like Imagine or Sculpt 4D. If it is from the mid to late 90s it could be something like Truespace, RayDream or one of the later Imagine renderers. I don't think it would be Lightwave or 3D Studio or any of the SGI apps, those would typically be out of the range of your average consumer due to cost (I'm assuming this would be a younger person, say 13-20 with fewer funds, possibly using one of the demo versions of the above software---except Povray which is of course free).

Reply 30 of 38, by Scali

Rank l33t

Did anyone actually use Povray professionally?
I mean, my impression of Povray was always that it was some freeware/opensource toy raytracer. I never heard of anyone actually using it professionally.
I'd expect 3dstudio, LightWave, Imagine and that sort of thing.
Pirating was common in those days, and I know I got a free copy of Imagine with an Amiga Format back then. So even for regular consumers it would not be too difficult to get their hands on 'the real stuff' either.


Reply 31 of 38, by snorg

Rank Oldbie
Scali wrote:

Did anyone actually use Povray professionally?
I mean, my impression of Povray was always that it was some freeware/opensource toy raytracer. I never heard of anyone actually using it professionally.
I'd expect 3dstudio, LightWave, Imagine and that sort of thing.
Pirating was common in those days, and I know I got a free copy of Imagine with an Amiga Format back then. So even for regular consumers it would not be too difficult to get their hands on 'the real stuff' either.

I don't think anyone used Povray professionally, sorry if I was unclear in my explanation. While I suppose someone might have used a cracked copy of 3DS, Lightwave or a demo copy of Imagine or Truespace, I was trying to say that a hobbyist would either be using a cracked copy of some major 3D app (but obviously wouldn't have access to anything running on an SGI; in the 90s an SGI probably would have still been somewhat expensive, even on the used market, unless it was very, very old like a Personal Iris, and the software would have been much more), or they would have been using a cheaper consumer application or freeware. There were only a few sub-$500 programs around that time: Imagine, Truespace, RayDream and of course freeware like Povray. Imagine and Truespace in particular often had free demo versions on cover disks and so on.

It is really quite impossible to tell from looking at any given picture what 3D application was used to render it, though. People used to claim back in the day that the 3DS renderer tended to make everything look "plastic like", however I think that is more a failure of the texturing or lighting abilities of the artist than anything inherently wrong with the renderer. Being a scanline renderer, there were limitations to what 3D Studio could do at the time: you couldn't have accurate reflection or refraction, you had to fake reflection using environment maps. You also had to fake volumetric lighting with the early versions.
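
For context, that trick usually amounted to turning the reflection vector straight into texture coordinates instead of tracing a real reflected ray; a rough generic sketch of a latitude/longitude lookup (not 3D Studio's actual code):

#include <cmath>

struct Vec3 { double x, y, z; };

// Mirror the incident direction i about the surface normal n (both unit length).
static Vec3 reflect(const Vec3& i, const Vec3& n)
{
    double d = 2.0 * (i.x * n.x + i.y * n.y + i.z * n.z);
    return { i.x - d * n.x, i.y - d * n.y, i.z - d * n.z };
}

// Map a (unit) reflection vector to latitude/longitude texture coordinates,
// which is how a scanline renderer can fake a reflection with an environment
// map lookup instead of spawning a reflected ray.
static void envMapUV(const Vec3& r, double& u, double& v)
{
    const double kPi = 3.14159265358979323846;
    u = 0.5 + std::atan2(r.z, r.x) / (2.0 * kPi);   // longitude -> [0,1)
    v = 0.5 - std::asin(r.y) / kPi;                  // latitude  -> [0,1]
}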

Reply 32 of 38, by spiroyster

Rank Oldbie
Scali wrote:

But... Aren't you then biasing every single sample by some arbitrary metric? You could argue that photon mapping is theoretically 'correct'... Because light sources actually do emit photons, and the amount of photons they emit *is* limited.

I wouldn't say arbitrary. At least not in a deterministic way. It's statistical. o.0.

Yes, photons are modelled, and yes they can have an overall finite weighting over a given time frame... but it's the distribution of those photons, and the paths they take as they bounce, which is in effect degraded, because only a limited number of photons are modelled (biased), and the simulation has to do this in a less coherent way due to fewer overall photons, each with a larger magnitude/payload. So while the overall power out is finite, and set, how they behave in the scene is not exactly like it does in reality.... it's like clumping large numbers of smaller photons into one big photon and modelling just the big photon (it should be more incoherent).

This is the case in all simulations, it's just that Monte Carlo can model paths and then, based on probability, analyse what the overall result would be. It can keep sampling and at some point it will converge on a result... at that point there is no need to keep going.

Scali wrote:

So a photonmap is consistent in the sense that the approximation factor of the amount of photons is the same throughout the scene, rather than incremented/decremented based on <insert random metric here>.

Yes, but it's the distribution that isn't modelled accurately enough... because a limited number of photons are sampled, and the overall power is divided between them. So the total power is correct.

Scali wrote:

Why hardware acceleration?
I don't think 3dsmax was ever hardware-accelerated. Raytracing is just stupidly inefficient, and rasterizing is incredibly efficient, even a pure software implementation. So it always was a good trade-off. One major advantage of rasterizing is that you can get it very stable, and with all-integer solutions. No 'holes' in your models, no 'seams' between polygons or anything. And because you are already interpolating gradients over a perspective plane, it's relatively simple to implement things like texture filtering taking the anisotropy into account.

Yes, I ain't dissing rasterization, but it is limited to triangles (CSG was popular for primitives), and can suffer overdraw. But yes, for determining visible intersection points, single bounce... very nice. Its effectiveness could be considered overkill beyond that.

Scali wrote:

I would argue that this is physically correct. Light doesn't change precision based on where your eye is in the scene, or anything like that.

No, but it's not distributed enough in the first place. Specular highlighting is considered view dependent though... observing the laws of power conservation, obviously.

Scali wrote:

Monte Carlo sounds like more of a 'hack'.

As much of a hack as any other applied model to simulate real life o.0. It's just another modelling method, one which in certain conditions is advantageous to use.

Scali wrote:

Sorta like how many raytracers lack robustness and precision in their calculations, so people just bruteforce it with more AA to blend out the wrong pixels.

Cynical... but yes, I guess that is one way of looking at it... but the robustness is based on mathematical principles, in this case probabilistic rather than deterministic. There are no wrong pixels, just samples which may or may not be anything like the final result, but essentially are part of its makeup.

Scali wrote:

Sure, the resolution is finite, limited by the resolution you choose for your calculations. Then again, that's the same for any other calculation done on a digital processor.

By precision in this context, I mean the precision of the data set in use (size of population/number of photons modelled) rather than numerical precision.

subhuman@xgtx wrote:

Chaps, do any of you happen to have a faint idea of what program was used to render those lovely specular shiny ass pieces of fine cgi art? (circa early to mid 1995.)

Echo what others said... difficult to say. Given that it's Namco, around 95.. Nichimen (N-World?) might be a candidate. It looks like multiple different engines are used in the different images.

Scali wrote:

Did anyone actually use Povray professionally?

🤣...Yes they did! In certain bespoke CAD circles it was used up to at least 2013 (and might still be used). In a previous employment I had to fix some POVray script exporter. With effort they can look quite good, and certainly from the perspective of development... you don't need to support multiple effects via export options/GUI and can instead simply spit out the geometry, scene setup and material stubs and then let the user edit the human-readable script to their heart's content. It's slow though... but yes, free.

POVray was one of many rendering exports for the program; I was quite surprised to actually get a ticket in relation to it past 2010 o.0.

snorg wrote:

free demo versions on cover disks

Scali wrote:

I got a free copy of Imagine with an Amiga Format back then

Yep got my first exposure to a 3D program called Imagine 2.0... courtesy of a CU Amiga cover disk 😀

Reply 33 of 38, by Scali

Rank l33t

I'd also like to point out that in the 80s and early 90s, a lot of people developed their own raytracers. It was quite a popular 'effect' in Amiga and early DOS demos to include either raytraced stills or animations, to show off their raytracers.
I always felt like a (Whitted) raytracer is a bit like the 'Hello World' of graphics. Polygon rasterizers are more difficult to get right, especially within the limitations of 80s/early 90s hardware (no FPU, too slow for z-buffering etc).
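
In that 'Hello World' spirit, the core of a Whitted tracer really is just a couple of intersection tests plus recursion for the shiny bits; a toy ray-sphere test, for instance (illustrative only, not any particular demo's code):

#include <cmath>

struct Vec3 { double x, y, z; };

// Does the ray o + t*d (d unit length) hit the sphere of radius r centred at c?
// If so, return the nearest t > 0: the building block of every 'shiny balls
// over a checkerboard' picture.
static bool hitSphere(const Vec3& o, const Vec3& d, const Vec3& c, double r, double& t)
{
    Vec3 oc = { o.x - c.x, o.y - c.y, o.z - c.z };
    double b = oc.x * d.x + oc.y * d.y + oc.z * d.z;            // half of the usual 'b' term
    double cc = oc.x * oc.x + oc.y * oc.y + oc.z * oc.z - r * r;
    double disc = b * b - cc;
    if (disc < 0.0) return false;                               // ray misses the sphere
    double s = std::sqrt(disc);
    t = -b - s;                                                 // nearest hit first
    if (t < 1e-6) t = -b + s;                                   // we might start inside the sphere
    return t > 1e-6;
}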


Reply 34 of 38, by spiroyster

Rank Oldbie
Scali wrote:

I always felt like a (Whitted) raytracer is a bit like the 'Hello World' of graphics.

Indeed... Shiny balls and an infinite checkerboard... look mummy I made a picture from equations... 😵

Scali wrote:

Polygon rasterizers are more difficult to get right, especially within the limitations of 80s/early 90s hardware (no FPU, too slow for z-buffering etc).

Something I'm not very experienced with and would like to do more of when I find the time o.0

I've always had the luxury of floats, and true colour 🤣

Reply 35 of 38, by snorg

Rank Oldbie

I ran across this Imgur album and that got me thinking: could a mid-end 286 or low-end 386 handle a flat-shaded polygon racing game?
Or would wireframe work better? If I recall correctly, Hard Drivin' was flat shaded 3D, but only one vehicle. You'd have multiple cars onscreen in a true racer so that might be too much for that sort of system.

https://imgur.com/gallery/VcPNv

Reply 36 of 38, by Scali

Rank l33t
snorg wrote:

I ran across this Imgur album and that got me thinking: could a mid-end 286 or low-end 386 handle a flat-shaded polygon racing game?

Yes, there were some 3d polygon racing games back in the day.
For example, Indy 500: https://youtu.be/mFqmzVhsgcU
And Test Drive 3: https://youtu.be/JH8DDIg0Y2Y
And Stunts: https://youtu.be/bbmhwN4gxEs

Those will all run fine on a 286-16.


Reply 37 of 38, by snorg

Rank Oldbie
Scali wrote:
snorg wrote:

I ran across this Imgur album and that got me thinking: could a mid-end 286 or low-end 386 handle a flat-shaded polygon racing game?

Yes, there were some 3d polygon racing games back in the day.
For example, Indy 500: https://youtu.be/mFqmzVhsgcU
And Test Drive 3: https://youtu.be/JH8DDIg0Y2Y
And Stunts: https://youtu.be/bbmhwN4gxEs

Those will all run fine on a 286-16.

You know, I completely forgot those other ones. What do you figure the limit is on polygon count in order to get close to 30fps on that class of machine?

Reply 38 of 38, by xjas

Rank l33t

^^ Stunts didn't run anywhere near 30FPS, even on a 386/33 or 486. It wasn't designed to. Some demoscene prods probably spun cubes or dodecahedrons near that speed on that kind of hardware but I imagine doing so in a full-screen 3D 'world' would be out of reach.

Space Station Oblivion and Total Eclipse are also interesting examples of immersive 3D worlds that run on a 286. But you'd be lucky to see 5FPS out of them.
