Scali wrote:Well, there's probably plenty of sites on that.
See here for example: https://renderman.pixar.com/view/movies-and-awards
That's quite a lot of big-name movies every year which use RenderMan.
Ok, you may be right here. Yes, that is an impressive resume (and has been annually for a while). This isn't my industry, but I do have acquaintances in it (whom I'm sure I have had this conversation with before?), and after quick discussions with some of them... it does appear that, while they tend to favour various tools for modelling and animation, a lot (all 3 actually... yeah I know, massive sample size) are using, or have recently used, RenderMan.
Scali wrote:Funny you should ask... Rasterizing is not necessarily a 2d operation. The last few generations of NV hardware support conservative rasterization, which also allows you to render triangles into 3D volumetric textures. One application of this is realtime global illumination:
http://www.geforce.com/whats-new/articl ... f-graphics
http://research.nvidia.com/publication/ ... ne-tracing
https://developer.nvidia.com/vxgi
Voxel cone tracing... not quite rasterizing, not quite raytracing, but something in-between.
Arp, I've heard of those, but not really looked into applications of it. It looks like some kind of 3D spatial radiosity using voxels instead of surface patches? I wonder how it would handle material transmission and transparency, which is traditionally radiosity's Achilles heel (my google-fu fails to bring up any translucent/transparent surfaces rendered with VXGI, so currently I'm guessing it does suffer o.0). If it ray-marches it, it would probably work; if it rasters it, no... unless it employed some kind of per-voxel/per-projected-pixel operation or something? ...In the comments of your first link, nVidia say cone, so no casting.
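To get my head round it, here's roughly how I picture the cone accumulation working: a minimal sketch (my own reconstruction, not nVidia's actual VXGI code), marching along a cone axis and sampling progressively coarser mips of a pre-voxelised radiance volume. sampleVoxelMip and friends are hypothetical stand-ins.

```
// Minimal voxel cone "march" sketch, assuming the scene radiance/occlusion has
// already been rasterized into a mip-mapped 3D texture (the voxelisation pass).
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };
struct Vec4 { float r, g, b, a; };      // rgb = radiance, a = occlusion

// Hypothetical trilinear lookup into the pre-voxelised volume (stubbed here).
Vec4 sampleVoxelMip(const Vec3& /*p*/, float /*lod*/) { return { 0.1f, 0.1f, 0.1f, 0.05f }; }

Vec3 madd(const Vec3& o, const Vec3& d, float t) { return { o.x + d.x * t, o.y + d.y * t, o.z + d.z * t }; }

// Accumulate radiance along one cone: the step size and mip level grow with the
// cone radius, so distant samples read coarser (pre-filtered) voxels.
Vec4 traceCone(Vec3 origin, Vec3 dir, float halfAngleTan, float maxDist, float voxelSize)
{
    Vec4 acc = { 0, 0, 0, 0 };
    float t = voxelSize;                        // start one voxel out to avoid self-sampling
    while (t < maxDist && acc.a < 1.0f) {
        float radius = t * halfAngleTan;        // cone footprint at this distance
        float lod    = std::log2(std::max(radius / voxelSize, 1.0f));
        Vec4 s = sampleVoxelMip(madd(origin, dir, t), lod);
        float w = 1.0f - acc.a;                 // front-to-back alpha compositing
        acc.r += w * s.r;  acc.g += w * s.g;  acc.b += w * s.b;
        acc.a += w * s.a;
        t += std::max(radius, voxelSize);       // larger steps as the cone widens
    }
    return acc;
}
```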
While on the subject of raytracing on nVidia hardware... they have had their own OptiX raytracer, which has been going for years. Early OptiX demos, while visually impressive, weren't much faster (and were slower in some cases) than home-grown CPU 'ports' of the demos. This was a while ago now (like 2008/09... I can't remember, but it was pre-2010 iirc)... a lot has probably changed, and Iray (nVidia's physically based renderer) uses OptiX. Results of VXGI do look impressive.
Still, like you say, combinations of methods... at the end of the day, even if it does suffer the same limitations as radiosity, it does bring 'faster' realism to the table, which is always welcome. I think I need to understand this a bit better.
Scali wrote:I think this contradiction is exactly my point.
🤣, yeah ok, should have left that bit out o.0. Either way you're right though... it's all an approximation! And without letting the Monte Carlo run for infinity (which would be ironic to optimise for), it's never going to be truly accurate.
Scali wrote:Depends on what you're going for.
Since technically a shadowmap is also a 'first bounce', it makes sense to use a rasterizer.
However, you could also use a raytracer, and just use the shadowmap as a sort of 'cache', so you don't need to test shadow rays against all sorts of objects. All you have to do is sample one 'object': the shadowmap.
Yeah, I was trying to imply that I think a lot of these maps would be generated from 'ray-casting' (be it raytracing - from the eye, path tracing/photon tracing - from the luminaires, or raymarching - volumetric ray casting... or even radiosity). The maps themselves could be the result of multi-bounce calculations... once you have these maps, the frame can be rasterized or raytraced. But if the maps are all constructed with raycast methods, simply using rasterization to present the final image means the rasterizer is nothing more than a glorified blitter? But yes, you could rely on rasterization to generate these shadowmaps... I guess even soft area-lit shadows... we need to leave point lights out though, since these don't exist IRL, so there's no reason to include them in 'realistic' rendering...
meh... in the end it is all doing single 'bounce' raycasting 😀
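Roughly how I read Scali's 'shadowmap as a cache' suggestion, as a sketch: instead of firing a shadow ray against the whole scene, project the shading point into light space and compare depths. All the types and names here are mine, purely illustrative.

```
// Sketch: using a rasterized shadowmap as a shadow-ray 'cache' inside a raytracer.
#include <algorithm>
#include <vector>

struct Vec3 { float x, y, z; };
struct Mat4 { float m[16]; };                   // row-major, as assumed here

// Light-space depth written by the rasterizer pass; a plain float array here.
struct ShadowMap {
    int width = 0, height = 0;
    std::vector<float> depth;
    float sample(float u, float v) const {
        int x = std::clamp(int(u * width),  0, width  - 1);
        int y = std::clamp(int(v * height), 0, height - 1);
        return depth[y * width + x];
    }
};

// Transform a point by a row-major 4x4 matrix, including the perspective divide.
Vec3 transformPoint(const Mat4& M, const Vec3& p)
{
    float x = M.m[0]*p.x + M.m[1]*p.y + M.m[2]*p.z  + M.m[3];
    float y = M.m[4]*p.x + M.m[5]*p.y + M.m[6]*p.z  + M.m[7];
    float z = M.m[8]*p.x + M.m[9]*p.y + M.m[10]*p.z + M.m[11];
    float w = M.m[12]*p.x + M.m[13]*p.y + M.m[14]*p.z + M.m[15];
    return { x / w, y / w, z / w };
}

// Returns true if worldPos is lit, by comparing against the depth the light saw.
// Assumes the map stores the same z convention produced by transformPoint.
bool litByShadowMap(const ShadowMap& map, const Mat4& lightViewProj,
                    const Vec3& worldPos, float bias = 1e-3f)
{
    Vec3 ndc = transformPoint(lightViewProj, worldPos);     // light-space NDC
    float u = ndc.x * 0.5f + 0.5f;                          // NDC [-1,1] -> [0,1]
    float v = ndc.y * 0.5f + 0.5f;
    if (u < 0.f || u > 1.f || v < 0.f || v > 1.f)
        return true;                                        // outside the map: treat as unshadowed
    float closestDepth = map.sample(u, v);                  // nearest occluder from the light
    return ndc.z - bias <= closestDepth;                    // we are that nearest surface -> lit
}
```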
Scali wrote:The first bounce is usually the most important though. For diffuse surfaces, there are no additional bounces required at all. And for surfaces that are reflective or refractive in some way, the indirect light will usually just be small 'detail', and can be calculated at a lower level-of-detail to speed up rendering without any visible impact.
For diffuse interreflection, there is most certainly a second bounce required. Yes, the first bounce is important (and will probably make up the majority of the pixel contribution for that sample), but the secondary+ 'bounces' are what give 'a better' GI approximation (diffuse interreflection is a part of GI). And it's this secondary (higher-order) bounce that a rasterizer cannot do without setting up a new view projection from the viewpoint of the intersection. I think the 'small detail' is important and is key to generating a 'more accurate' GI approximation, but yes, I also appreciate that in a lot of cases realism doesn't perceivably increase with it.
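As a toy illustration of what that extra bounce buys you (a generic path-tracing-style sketch; trace, directLight and sampleHemisphere are hypothetical helpers, not anyone's production code):

```
// Toy sketch: direct lighting only vs. one additional diffuse bounce.
struct Vec3 { float x, y, z; };
struct Hit  { bool ok; Vec3 pos, normal, albedo; };

Hit  trace(const Vec3& origin, const Vec3& dir);   // assumed: nearest-hit scene query
Vec3 directLight(const Hit& h);                    // assumed: shadowed light loop at h
Vec3 sampleHemisphere(const Vec3& n);              // assumed: cosine-weighted direction about n

Vec3 mul(const Vec3& a, const Vec3& b) { return { a.x*b.x, a.y*b.y, a.z*b.z }; }
Vec3 add(const Vec3& a, const Vec3& b) { return { a.x+b.x, a.y+b.y, a.z+b.z }; }

// First bounce only: this is all a single rasterization pass gives you "for free".
Vec3 shadeDirect(const Hit& h) { return mul(h.albedo, directLight(h)); }

// First bounce + one diffuse interreflection: needs a *new* visibility query from
// the hit point, which is exactly what the initial view projection cannot answer.
Vec3 shadeOneBounce(const Hit& h, int samples = 16)
{
    Vec3 indirect = { 0, 0, 0 };
    for (int i = 0; i < samples; ++i) {
        Hit h2 = trace(h.pos, sampleHemisphere(h.normal));  // secondary visibility
        if (h2.ok)
            indirect = add(indirect, mul(h2.albedo, directLight(h2)));
    }
    float inv = 1.0f / samples;
    indirect = { indirect.x * inv, indirect.y * inv, indirect.z * inv };
    return add(shadeDirect(h), mul(h.albedo, indirect));
}
```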
Scali wrote:Say what?
Firstly, as I thought we had covered before, the actual 'shading' is not really part of rasterizing. A shading function will simply require certain input parameters (eye position, lightsources, surface normal, colourmaps etc), and will produce an output colour for that pixel.
In theory, the exact same shading function can be used in a raytracer, a rasterizer, or any other kind of renderer.
Yes, we did cover that o.0. I think I'm stuck back in the early 2000s with this argument, back when rasterizers, as you so eloquently put it below, 'exploit the fact that a polygon is a linear shape to interpolate all pixels inside the polygon'. It was this dropping of the polygon surface pixels and interpolating from the vertices which could (and does, as you know) lead to important surface details being lost, which raytracing didn't do by design. However, this per-pixel limitation (the basis of my dated argument) has been moot since the introduction of per-pixel operations (OT) several years ago now... o.0
I had pre-per-pixel-op pipelines in my mind for some reason... I don't know what I was banging on about; my argument is too oldskool and doesn't exist anymore. Agree though, the exact same shader (algorithm) could be used, provided the same material models are used for both caster/raster. Which, for a single bounce, would probably generate the same result (ignoring any slight nuances of the execution hardware, planetary alignment, etc.).
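And just to illustrate that point to myself: the same shading routine can be fed from either pipeline, the only difference being where its inputs come from (interpolated attributes vs. a ray hit). Purely illustrative names, minimal Lambert-only material:

```
// Sketch: one shading function, two front-ends. The material evaluation does not
// care whether its inputs came from interpolated vertex attributes (rasterizer)
// or from a ray-surface intersection (raytracer).
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// eyePos is unused by the Lambert term; it would feed a specular term.
struct ShadeInput { Vec3 position, normal, albedo, eyePos, lightPos, lightColour; };

// A simple Lambertian diffuse shade: the "material model" shared by both renderers.
Vec3 shade(const ShadeInput& in)
{
    Vec3 L = { in.lightPos.x - in.position.x, in.lightPos.y - in.position.y, in.lightPos.z - in.position.z };
    float len = std::sqrt(dot(L, L));
    L = { L.x / len, L.y / len, L.z / len };
    float ndotl = std::max(dot(in.normal, L), 0.0f);
    return { in.albedo.x * in.lightColour.x * ndotl,
             in.albedo.y * in.lightColour.y * ndotl,
             in.albedo.z * in.lightColour.z * ndotl };
}

// Rasterizer front-end: inputs interpolated across the triangle per pixel.
Vec3 shadeFragment(const ShadeInput& interpolatedAttributes) { return shade(interpolatedAttributes); }

// Raytracer front-end: inputs reconstructed at the ray hit point.
Vec3 shadeRayHit(const ShadeInput& hitAttributes) { return shade(hitAttributes); }
```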
Scali wrote:Secondly, why would it only be Lambertian diffuse? Given the above, you can just run the full shading equation for the first bounce directly from the rasterizer, just as if it was a first bounce from a ray.
Perhaps it's time you start seeing polygon rasterizing as simply an optimized first-bounce raytracer. Because in a way, that's what it is. It projects a polygon to screen-space in a way that is equivalent to raytracing. Then it exploits the fact that a polygon is a linear shape to interpolate all pixels inside the polygon.
Instead of sorting surfaces along the ray and picking the nearest one, you sort them with a z-buffer.
I do see it like this, like you quoted me... 'a first bounce optimised raytracer'... but it's the later bounces that cannot be done with this information... though yes, these subsequent/secondary bounces can be deduced by rasterization again, albeit in another pass with a different view projection to the initial one. So we can go one further and say... a rasterizer is a 'single-bounce optimised raytracer' 😉. Given all the pixels wasted in any given rendered frame (apart from the first pass, the eye paths/camera view, in which all pixels are used)... it just feels inefficient... but yar, still doable anyway, and then I guess it boils down to what's faster for this ray-casting malarky: an acceleration structure and single ray intersection test on the CPU, say, or rastering the geometry to calculate the intersection point (generating an awful lot of results, more than you need)? I suppose the 'wasted' intersection tests are essentially a shite-load of samples and could be used as such... and I suppose, even if the more wasteful method of execution is faster, it's still best to go with it if speed is your end goal.
Ignore that Lambertian reference, you are right: multi-pass rendering would give all the components you need. I was thinking of a single-pass, single-MVP scene; don't know why, don't ask, I couldn't explain.
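The 'z-buffer instead of sorting along the ray' equivalence in a nutshell, as I read it (a generic sketch, nothing vendor-specific):

```
// Sketch: the same "keep the nearest surface" decision, expressed two ways.
#include <limits>
#include <vector>

struct Hit { float t; int surfaceId; };

// Raytracer: walk candidate intersections along the ray, keep the smallest t.
Hit nearestAlongRay(const std::vector<Hit>& candidates)
{
    Hit best = { std::numeric_limits<float>::infinity(), -1 };
    for (const Hit& h : candidates)
        if (h.t < best.t) best = h;
    return best;
}

// Rasterizer: the z-buffer makes the same decision per pixel, just spread over
// the order in which triangles happen to be drawn.
struct DepthBuffer {
    std::vector<float> depth;
    std::vector<int>   surfaceId;
    void testAndWrite(int pixel, float z, int id)
    {
        if (z < depth[pixel]) { depth[pixel] = z; surfaceId[pixel] = id; }  // nearest-wins, like min(t)
    }
};
```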
A guy called Hugo Elias wrote some good stuff about radiosity eons ago, and also proposed raster acceleration for just these reasons: draw the scene with each patched surface tile in a unique colour, then sample the output buffer to find the depth-tested patch each pixel sees. This 'second-order bounce' (not a bounce per se) is not optional with radiosity, and it presents a problem, since the information from the previous render cannot help; it's from the perspective of the initial viewport position. To go further, another pass must be done with a new view projection etc. from the point of view of the intersection.
This must then be repeated for each intersection. I did once attempt to take this 'one step further' on an SGI 320, which has a UMA architecture (allowing the onboard gfx to use up to ~80% of the RAM present... graphics with 768MB of VRAM in 1998). Other than being able to handle huge textures, there was little else this large VRAM gave, but it did allow me to store multiple prerendered frames (one for each patch... let's call them 'projected depth maps', since they deal with projection from the given patch in the scene). Needless to say, even with 1GB (768MB usable), the number of polygons was limited, and I found even basic scenes giving me problems with memory space. Only one set of rendered projection maps was needed (one for each patch in the scene, plus the frame buffer (eye projection)), but the resolution of these was important (detail of the projected hits/misses) and quite dependent on the scene... either way, it was juggling 'high-res projected depth maps' against scene patch count. The end result was very hit and miss (no pun intended), although generally fast. I could of course persist all these to disk as images and combine them, but keeping them in VRAM meant geometry changes could be rendered again without those buffer read/write overheads. With radiosity, while the form factors for each patch don't change each pass/iteration, the intensity value calculated each pass does, and has to be persisted back to the map to be queried again next iteration. Faster than CPU-bound acceleration structures, but limited in what could be rendered, and it wasn't real time. Of course, the limitations of the rasterizer (in 1998) no doubt contributed heavily to this. In fact, this sounds similar to VXGI o.0 - storing and sampling a traditional trapezoidal frustum/projection map rather than a projected cone like VXGI does... if I'm reading that right?
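For anyone who hasn't read the Hugo Elias stuff, the core of the raster trick is basically an 'item buffer': render the scene from a patch's point of view with every patch flat-shaded in a unique ID colour, then count how many pixels each ID covers as a crude form-factor estimate. A rough sketch of that counting step (my own layout and names, and I'm leaving out the hemicube's per-pixel weighting):

```
// Sketch of the item-buffer step of raster-accelerated radiosity: after rendering
// the scene from a patch's viewpoint with every patch drawn in a unique flat
// "colour" (its ID), count pixel coverage per ID as a crude form-factor estimate.
// Real implementations weight each pixel by its position on the hemicube face;
// that weighting is omitted here for brevity.
#include <cstddef>
#include <cstdint>
#include <vector>

std::vector<float> estimateFormFactors(const std::vector<uint32_t>& itemBuffer, // patch ID per pixel
                                       std::size_t patchCount)
{
    std::vector<float> formFactor(patchCount, 0.0f);
    for (uint32_t id : itemBuffer)
        if (id < patchCount)
            formFactor[id] += 1.0f;                 // this pixel "sees" patch 'id'
    const float invPixels = 1.0f / float(itemBuffer.size());
    for (float& f : formFactor)
        f *= invPixels;                             // fraction of the view covered by each patch
    return formFactor;
}
```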
Scali wrote:No, they approximate it.
Firstly they sample a lot less paths than there would be photons in the real world.
Secondly, the physical lighting models are simplified (and in fact, we don't even understand light in all detail yet, so we couldn't make a fully correct model even if we wanted to).
True, they do approximate it. Yes, I will eat all of my words on that. And you are totally right, there are 'possibly'/'probably' many more light paths which should be considered, but are not, because the image is 'adequately faked'... yes yes... I'm munching my hat now too. There are a few phenomena in light that we still need to understand (including: what is it? Are you a wave? Are you a photon? Depends how I observe you, and on what understanding we have in the domain)... Raycasters obviously assume 'photon' paths (not really considering actual propagation through a medium, or vector fields/particles); raymarchers (aka volumetric/voxel) 'can' model this propagation through a medium, so they essentially model photon paths and photon payload... (more importantly, accurate attenuation and its effect on various payloads during propagation). Photons can also be distorted (diffraction/refraction), which makes for an even more accurate approximation of the light's behaviour... ultimately though, yes, it is a faked approximation built on our current understanding o.0...
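The attenuation bit is what raymarchers get almost for free: step through the medium and apply Beer-Lambert absorption at each step. A toy sketch, assuming a hypothetical densityAt() lookup and a uniform absorption coefficient:

```
// Toy sketch of the attenuation a volumetric raymarcher models: march through a
// participating medium and accumulate Beer-Lambert absorption per step.
#include <cmath>

struct Vec3 { float x, y, z; };

float densityAt(const Vec3& p);                   // assumed: sampled from a volume texture etc.

// Returns the fraction of light surviving a march of length 'dist' along 'dir'.
float transmittance(Vec3 origin, Vec3 dir, float dist, float sigmaAbsorb, float stepSize)
{
    float opticalDepth = 0.0f;
    for (float t = 0.5f * stepSize; t < dist; t += stepSize) {
        Vec3 p = { origin.x + dir.x * t, origin.y + dir.y * t, origin.z + dir.z * t };
        opticalDepth += densityAt(p) * sigmaAbsorb * stepSize;   // accumulate absorption
    }
    return std::exp(-opticalDepth);                              // Beer-Lambert falloff
}
```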
However, I will point out... irrespective of how the various calculations are accelerated (if at all), a lot of methods for approximating realistic lighting/lux are pointing towards brute-force 'ray-casting/marching' methods of some description (be it ray-traced, path-traced, or voxel-traced/marched), or emulating them... which is essentially what a rasterizer does. But yes, I was wrong to bash them. I'd had a few bevvys, was feeling nostalgic, it was a hot day, and I had single-pass rastering in my mind o.0.
Scali wrote:Well, get yourself a PowerVR GPU then. They are already doing raytracing: https://www.imgtec.com/powervr/ray-tracing/
Been there... they don't do it so much anymore, unfortunately (although they do still own Caustic)... I rode that crest for the short time it was breaking (it was only out for about 2 years, realistically maybe less than a year). 'Back int fall of 2013' I implemented a PoC using OpenRL. I used it to implement a BDPT (bidirectional path tracer) with 'various hacks... ahem, optimisations'; the end goal was a quick photorealistic image of the working design (as always). Usually a couple of hundred thousand polygons, but potentially in the millions depending on the organic, noisy nature of the digitized data present in a lot of working scenes. Decent renders were in the 'seconds', with results akin to large CUDA-core boards... but without any CUDA/OpenCL kernels required.
Here is one of my R2500 boards...
The attachment R2500.jpg is no longer available
I have two, and yes, they scale! Apparently (although I was CPU-bottlenecked by just one on my workstation at the time 🙁). Note the blinkenlights top centre of the board! Knight Rider-style startup, then iterating consecutively when all rays in the buffers have been cast 😀, so not only does it give a sense of the 'FPS', it also gives me a reason to buy a Perspex case.
16GB on board o.0 (more than a Titan X!), and 2x RTU (Ray-Tracing Unit... not *G*PU! No nVidia! You should know better, Scali 😉 ... and no on-board shader execution either, while we are on the subject). However, a lot of the memory is taken by the internal acceleration structures employed (and you have no control over them, so it's not like that's 16GB available to the user), but it could take some large models with fast results.
It is essentially hardware-accelerated ray-casting, supplemented by a shader language, RLSL (YASL 😵)... As the name implies, it's like OpenGL... OpenGL 4.3, with some seasoning of 4.5 architecture (context usage/switching). Geometry would be uploaded to the board's buffers via OpenRL buffers (primitives/vertices), then you'd upload the shaders (a program for each primitive, consisting of vertex shaders and ray shaders) and a frame shader (which defined the camera projection, jittered samples, and could be used for lens effects). These shaders were compiled via LLVM and executed on the CPU. And therein lies a bit of the problem: although the heavy intersection tests are accelerated (and there are many of them), ultimately the general-purpose shading is CPU-bottlenecked. And mucho shader execution is required.
I wasn't entirely convinced by RLSL (limitations in its ability to express straightforward concepts in some cases), and so opted to do a sort of 'deferred' rendering: using the ray-casting capabilities of the card to calculate the intersections, normals and other metrics for my g-buffer, then rendering each bounce off-screen. I could then combine these in CPU land, but not through the RLSL framework (which I couldn't do BDPT with directly). I'm sure this shader execution could be siphoned off as compute kernels, thus freeing the CPU, and the stalls whittled out... I never got that far though, before having to move to another project... and when I was allowed to return, OpenRL had been deprecated o.0.
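The CPU-side 'combine' was conceptually nothing more than accumulating the per-bounce off-screen buffers into the final image; something along these lines (a generic reconstruction of the idea only, not actual OpenRL/RLSL code):

```
// Generic sketch of combining per-bounce off-screen buffers on the CPU: each
// buffer holds the radiance gathered at bounce N, already multiplied by the
// path throughput up to that bounce; the final image is just their sum.
#include <cstddef>
#include <vector>

struct Colour { float r, g, b; };

std::vector<Colour> combineBounces(const std::vector<std::vector<Colour>>& bounceBuffers,
                                   std::size_t pixelCount)
{
    std::vector<Colour> frame(pixelCount, Colour{ 0, 0, 0 });
    for (const auto& bounce : bounceBuffers)            // bounce 0 = direct, 1 = first indirect, ...
        for (std::size_t i = 0; i < pixelCount && i < bounce.size(); ++i) {
            frame[i].r += bounce[i].r;
            frame[i].g += bounce[i].g;
            frame[i].b += bounce[i].b;
        }
    return frame;
}
```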
Imagination did tell me it was a PoC from their point of view, present on their PowerVR roadmap; the technology was to make its way into PowerVR chipsets (but I'm unsure if it ever did). Given recent Apple/Imagination relations, and the Vulkan bandwagon they jumped on (Vulkan may not even have been a glint in the milkman's eye in 2013), I'm unsure how that roadmap is going right now. o.0