VOGONS


Direct3D, "Shaders", nomenclature fun


Reply 20 of 33, by Scali

agent_x007 wrote:

1) How much does Photogrammetry help with rendering quality vs Raytracing (I mean, effects can look really nice to me) ?

It's just a tool that you can use with a variety of rendering techniques.
Certain games develop their 3D characters by building an actual clay model or such, and then scanning it for the geometry.
Does motion capture also fall in that category? It is used a lot in games.

agent_x007 wrote:

2) (On Topic) I guess RV200 (Radeon 7500), has the same capabilities and limitations as R100 ?

Pretty much yes, according to this: http://www.anandtech.com/show/811/2

The second chip being announced today is the RV200.  In spite of the name, you should think of the RV200 as a 0.15-micron Radeon because, essentially, that's what it is.  The RV200 has the same features as the original Radeon with two changes: the memory controller from the R200 and the display engine from the RV100 (Radeon VE).  The memory controller from the R200 gives the RV200 the 256-bit memory accesses and nothing more -- it's still a 128-bit wide DDR memory interface.  The RV100's display engine gives the RV200 HydraVision support, which is ATI's dual display solution.  This is actually also present on the R200 core.

agent_x007 wrote:

3) Compute Shader was introduced in DirectX 10 (when NV went "CUDA" for the first time - G80 chip).

No, DX10 does not have support for Compute Shader.
You could use CUDA in combination with DX9, DX10 or OpenGL though, but that was not a 'unified API'.
It took years after CUDA for OpenCL and DirectCompute to emerge. NV was way ahead of its time with the G80.
See also: https://en.wikipedia.org/wiki/DirectCompute

DirectCompute is part of the Microsoft DirectX collection of APIs, and was initially released with the DirectX 11 API but runs on graphics processing units that use either DirectX 10 or DirectX 11.


Reply 21 of 33, by swaaye


I believe RV200 has more anisotropic filtering flexibility. You can select intermediate filtering levels whereas the R100 only had 16X available. It also has the ability to run asynchronous RAM / core clocks. R100 will freeze if you try this. That's what I recall offhand.

Reply 22 of 33, by agent_x007


@up Thank you.

Scali wrote:
agent_x007 wrote:

3) Compute Shader was introduced in DirectX 10 (when NV went "CUDA" for the first time - G80 chip).

No, DX10 does not have support for Compute Shader.
You could use CUDA in combination with DX9, DX10 or OpenGL though, but that was not a 'unified API'.
It took years after CUDA for OpenCL and DirectCompute to emerge. NV was way ahead of its time with the G80.
See also: https://en.wikipedia.org/wiki/DirectCompute

DirectCompute is part of the Microsoft DirectX collection of APIs, and was initially released with the DirectX 11 API but runs on graphics processing units that use either DirectX 10 or DirectX 11.

I thought CUDA and ATI's "Stream" were done on compute shader... don't know why.

Thank you.


Reply 23 of 33, by The Serpent Rider


Apparently both G70 (NV40?) and R5xx had some GPGPU capability, even if extremely limited.
https://www.youtube.com/watch?v=gLgb9AdnaBI

Can't find demo for Nvidia though. I remember it was bowling with lots of dinosaur skeletons.

swaaye wrote:

You can select intermediate filtering levels whereas the R100 only had 16X available

Even when forced to 2/4/8 in Serious Sam?


Reply 24 of 33, by Scali

agent_x007 wrote:

I thought CUDA and ATI's "Stream" were done on compute shader... don't know why.

It's the other way around:
CUDA and ATi Stream were introduced as proprietary GPGPU solutions (there have been GPGPU experiments long before that though, I believe the first time I read about it, people were researching the possibilities on a Radeon 8500).
This was later standardized into 'Compute Shaders' in DX11/DirectCompute and OpenCL. So the hardware existed before these APIs were around.
You need the DX11 API to use compute shaders, but most DX10-era GPUs support it.
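To make the 'compute shader' model concrete: conceptually it's just a kernel that runs once per thread, in thread groups, over a buffer. A rough CPU-side emulation in Python (purely illustrative; the names here are made up and not any real API):

# Minimal CPU-side emulation of the compute-shader threading model
# (illustrative only; real compute shaders are written in HLSL/OpenCL
# and dispatched through the D3D11/OpenCL runtime).
import numpy as np

THREADS_PER_GROUP = 64          # like [numthreads(64,1,1)] in HLSL

def kernel(global_id, src, dst):
    """The 'shader' body: one thread processes one element."""
    if global_id < src.size:
        dst[global_id] = src[global_id] * 2.0

def dispatch(num_groups, src, dst):
    """Emulates a Dispatch(num_groups,1,1): every thread in every group runs the kernel."""
    for group_id in range(num_groups):
        for local_id in range(THREADS_PER_GROUP):
            kernel(group_id * THREADS_PER_GROUP + local_id, src, dst)

data = np.arange(1000, dtype=np.float32)
out = np.empty_like(data)
dispatch((data.size + THREADS_PER_GROUP - 1) // THREADS_PER_GROUP, data, out)
assert np.allclose(out, data * 2.0)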


Reply 25 of 33, by spiroyster

Scali wrote:

Well, there's probably plenty of sites on that.
See here for example: https://renderman.pixar.com/view/movies-and-awards
That's quite a lot of big-name movies every year which use RenderMan.

Ok, you may be right here. Yes, that is an impressive resume (and has been annually for a while). This isn't my industry, but I do have acquaintances in this industry (whom I'm sure I have had this conversation with before?), and after quick discussions with some of them... it does appear that, while they tend to favour various tools for modelling and animation, a lot (all 3 actually... yeah I know, massive number) are using, or have recently used, RenderMan.

Scali wrote:

Funny you should ask... Rasterizing is not necessarily a 2d operation. The last few generations of NV hardware support conservative rasterization, which also allows you to render triangles into 3D volumetric textures. One application of this is realtime global illumination:
http://www.geforce.com/whats-new/articl ... f-graphics
http://research.nvidia.com/publication/ ... ne-tracing
https://developer.nvidia.com/vxgi

Voxel cone tracing... not quite rasterizing, not quite raytracing, but something in-between.

Arp, I've heard of those, but not really looked into applications of it. It looks like some kind of 3D spatial radiosity using voxels instead of surface patches? I wonder how it would handle material transmission and transparency, which is traditionally Radiosity's Achilles heel (my google fu fails to bring up any translucent/transparent surfaces rendered with VXGI, so currently I'm guessing it does suffer o.0). If it ray-marches it, it would probably work; if it rasters it, no... unless it employed some kind of per-voxel/per-projected-pixel operation or something? ... in the comments of your first link, nVidia say cone, so no casting.

While on the subject of raytracing on nVidia hardware... they have had their own Optix raytracer, which has been going for years. Early Optix demos, while visually impressive, weren't much faster (and were slower in some cases) than home-grown CPU 'ports' of the demos. This was a while ago now (like 2008/09... I can't remember, but it was pre-2010 iirc)... a lot has probably changed, and Iray (nVidia's physically based renderer) uses Optix. Results of VXGI do look impressive.

Still, like you say, combinations of methods... at the end of the day, even if it does suffer the same limitations as radiosity... it does bring 'faster' realism to the table, which is always welcome. I think I need to understand this a bit better.

Scali wrote:

I think this contradiction is exactly my point.

🤣, yeah ok, should have left that bit out o.0. Either way you're right though... it's all an approximation! And without letting the MC run for infinity (which would be ironic to optimise for) it's never going to be truly accurate.

Scali wrote:

Depends on what you're going for.
Since technically a shadowmap is also a 'first bounce', it makes sense to use a rasterizer.
However, you could also use a raytracer, and just use the shadowmap as a sort of 'cache', so you don't need to test shadow rays against all sorts of objects. All you have to do is sample one 'object': the shadowmap.

Yeah, I was trying to imply that I think a lot of these maps would be generated from 'ray-casting' (be it raytracing - from the eye, path tracing/photon tracing - from the luminaires, or raymarching - volumetric ray casting... or even radiosity). The maps themselves could be the result of multi-bounce calculations... once you have these maps, the frame can be rasterized or raytraced. But if the maps are all constructed with raycast methods, simply using rasterization to present the final image means the rasterizer is nothing more than a glorified blitter? But yes, you could rely on rasterization to generate these shadow maps... I guess even soft area-lit shadows... we need to leave point lights out though, since these don't exist IRL, so no reason to include them in 'realistic' rendering...

meh... in the end it is all doing single 'bounce' raycasting 😀

Scali wrote:

The first bounce is usually the most important though. For diffuse surfaces, there are no additional bounces required at all. And for surfaces that are reflective or refractive in some way, the indirect light will usually just be small 'detail', and can be calculated at a lower level-of-detail to speed up rendering without any visible impact.

For diffuse interreflection, there is most certainly a second bounce required. Yes, the first bounce is important (and will probably make up the majority of the pixel contribution for that sample), but the secondary+ 'bounce' is what gives 'a better' GI approximation (diffuse interreflection is a part of GI). And it's this secondary (higher order) bounce that a rasterizer cannot do without setting up a new view projection from the viewpoint of the intersection. I think the 'small detail' is important and is key to generating a 'more accurate' GI approximation, but yes, I also appreciate that in a lot of cases realism doesn't perceivably increase with it.

Scali wrote:

Say what?
Firstly, as I thought we had covered before, the actual 'shading' is not really part of rasterizing. A shading function will simply require certain input parameters (eye position, lightsources, surface normal, colourmaps etc), and will produce an output colour for that pixel.
In theory, the exact same shading function can be used in a raytracer, a rasterizer, or any other kind of renderer.

Yes, we did cover that o.0. I think I'm stuck back in the early 2000s with this argument, when rasterizers, as you so eloquently put it below, 'exploit the fact that a polygon is a linear shape to interpolate all pixels inside the polygon'. It was this dropping of the polygon surface pixels and interpolating from the vertices which could (and does, as you know) lead to important surface details being lost, which raytracing didn't do by design. However, this per-pixel limitation (the basis of my dated argument) has been made moot by per-pixel operations (OT), which have been around for several years now... o.0

I had pre-per-pixel-op pipelines in my mind for some reason... I don't know what I was banging on about, my argument is too oldskool and doesn't exist anymore. Agree though, the exact same shader (algorithm) could be used, provided the same material models are used for both caster/raster. Which for a single bounce would probably generate the same result (ignoring any slight nuances the execution hardware does, planetary alignment etc).
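To illustrate the point being agreed on here, a tiny Python sketch (made-up positions, normals and a simple Lambert term; no real API): the shading function only cares about the hit attributes, not whether they came from a rasterized fragment or from a ray hit.

# One shading function, two 'renderers': the shader only needs a hit position,
# a normal and a light -- it doesn't care whether those came from interpolated
# fragment attributes or from a ray-triangle intersection. Purely illustrative.
import numpy as np

def shade(position, normal, light_pos, albedo):
    """Simple Lambert term; the same function serves both code paths."""
    l = light_pos - position
    l = l / np.linalg.norm(l)
    return albedo * max(0.0, float(np.dot(normal, l)))

light = np.array([0.0, 5.0, 0.0])
albedo = np.array([1.0, 0.5, 0.2])

# 'Rasterizer' path: attributes interpolated across a triangle...
frag_pos    = np.array([0.3, 0.0, 0.3])
frag_normal = np.array([0.0, 1.0, 0.0])
print(shade(frag_pos, frag_normal, light, albedo))

# 'Raytracer' path: attributes computed at a ray hit point...
hit_pos    = np.array([0.3, 0.0, 0.3])
hit_normal = np.array([0.0, 1.0, 0.0])
print(shade(hit_pos, hit_normal, light, albedo))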

Scali wrote:

Secondly, why would it only be Lambertian diffuse? Given the above, you can just run the full shading equation for the first bounce directly from the rasterizer, just as if it was a first bounce from a ray.

Perhaps it's time you start seeing polygon rasterizing as simply an optimized first-bounce raytracer. Because in a way, that's what it is. It projects a polygon to screen-space in a way that is equivalent to raytracing. Then it exploits the fact that a polygon is a linear shape to interpolate all pixels inside the polygon.
Instead of sorting surfaces along the ray and picking the nearest one, you sort them with a z-buffer.

I do see it like this, like you quoted me... 'a first bounce optimised raytracer'... but these later bounces are what cannot be done with this information... but yes, these subsequent/secondary bounces can be deduced by rasterization again, albeit another pass with a different view projection to the initial one. So we can go one further and say... a rasterizer is a 'single-bounce optimised raytracer' 😉... Given all the pixels wasted in any given rendered frame (apart from the first frame, the eye paths/camera view in which all pixels are used)... it just feels inefficient... but yar... still doable anyway, and then I guess it boils down to... what's faster for this ray casting malarkey... acceleration structure and single ray intersection test on CPU say, or raster the geometry to calculate the intersection point (generating an awful lot of results, more than you need)...? I suppose the 'wasted' intersection tests essentially are a shite load of samples and could be used as such... and I suppose, even if the less efficient method of execution was faster, it's still best to go with that if speed is your end goal.
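The z-buffer point in the quote above, as a toy Python sketch (dummy per-pixel depths, purely illustrative): keeping the closest fragment per pixel is the same 'pick the nearest hit' decision a raytracer makes along each ray.

# Z-buffer as 'nearest hit' selection: for each pixel, the rasterizer keeps the
# closest fragment, which is exactly what a ray/scene intersection does when it
# picks the nearest hit along the ray. Toy sketch, not a real rasterizer.
import numpy as np

W, H = 4, 4
zbuffer = np.full((H, W), np.inf)
framebuf = np.zeros((H, W), dtype=int)

# Pretend two 'surfaces' each produced a depth for every pixel.
fragments = [
    (1, np.full((H, W), 5.0)),   # surface id 1 at depth 5
    (2, np.full((H, W), 3.0)),   # surface id 2 at depth 3 (closer)
]

for surf_id, depth in fragments:
    closer = depth < zbuffer              # the classic z-test
    zbuffer[closer] = depth[closer]
    framebuf[closer] = surf_id

print(framebuf)   # every pixel 'sees' surface 2, the nearest one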

Ignore that Lambertian reference, you are right, multi-pass rendering would give all the components you need. I was thinking of a single-pass, single-MVP scene, don't know why, don't ask, I couldn't explain.

A guy called Hugo Elias wrote some good stuff about radiosity eons ago, and also proposed raster acceleration for just these reasons. Draw the scene with each patched surface tile in a unique colour; then the output buffer can be sampled to find the depth-tested patch each pixel sees. This 'second order bounce' (not a bounce per se) is not optional with radiosity, and that presents a problem, since the information from the previous render cannot help: it's from the perspective of the initial viewport position. To go further, another pass must be done with a new view projection etc. from the point of view of the intersection.
This must then be repeated for each intersection. I did once attempt to take this 'one step further' on an SGI 320, which has a UMA architecture (allowing the onboard gfx to use up to ~80% of the RAM present... gfx with 768MB vram in 1998). Other than being able to handle huge textures, there was little else this large vram gave, but it did allow me to store multiple prerendered frames (one for each patch... let's call them 'projected depth maps', since they deal with projection from the given patch in the scene).

Needless to say, even with 1GB (768 usable), the number of polygons was limited and I found even basic scenes giving me problems with memory space. Only one set of rendered projection maps was needed (one for each patch in the scene, plus the frame buffer (eye projections)), but the resolution of these was important (detail of the projected hits/misses) and quite dependent on the scene... either way it was juggling 'high-res projected depth maps' against scene patch count. The end result was very hit and miss (no pun intended), although generally fast. I could of course have persisted all these to disk as images and combined them, but keeping them in vram meant geometry changes could be rendered again without those buffer read/write overheads. With radiosity, while the form factors for each patch don't change each pass/iteration, the intensity value calculated each pass does, and it needs to be persisted back to the map to be queried again the next iteration.

Faster than CPU-bound acceleration structures, but limited in what could be rendered, and it wasn't real-time. Of course, the limitations of the rasterizer (in 1998) no doubt contributed heavily to this. In fact this sounds similar to VXGI o.0, storing and sampling a traditional trapezoidal frustum/projection map rather than a projected cone like VXGI does... if I'm reading that right?
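For anyone who hasn't seen the unique-colour-per-patch trick mentioned above, here it is in miniature as a Python sketch (the 'rendered' item buffer below is faked with random patch IDs; a real implementation would rasterize a hemicube from the patch and weight each pixel by direction):

# Hugo Elias-style trick in miniature: render the scene from a patch's point of
# view with every patch drawn in a unique flat 'colour' (here just its ID), then
# histogram the result -- the pixel counts approximate how much of the view each
# patch occupies, i.e. a crude (unweighted) visibility/form-factor estimate.
import numpy as np
from collections import Counter

RES = 64

def fake_render_item_buffer(res):
    """Stand-in for the rasterized 'item buffer': each pixel holds the ID of the
    nearest patch seen in that direction (made-up data here)."""
    rng = np.random.default_rng(0)
    return rng.integers(0, 5, size=(res, res))   # patch IDs 0..4

item_buffer = fake_render_item_buffer(RES)
counts = Counter(item_buffer.ravel().tolist())
total = RES * RES
for patch_id, n in sorted(counts.items()):
    print(f"patch {patch_id}: ~{n / total:.2%} of the view (unweighted)")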

Scali wrote:

No, they approximate it.
Firstly they sample a lot less paths than there would be photons in the real world.
Secondly, the physical lighting models are simplified (and in fact, we don't even understand light in all detail yet, so we couldn't make a fully correct model even if we wanted to).

True, they do approximate it. Yes, I will eat all of my words on that. And you are totally right, there are 'possibly'/'probably' many more light paths which should be considered, but are not, because the image is 'adequately faked'... yes yes... I'm munching my hat now too. There are a few phenomena in light that we still need to understand (including: what is it? Are you a wave? Are you a photon? Depends how I observe you and what understanding we have in the domain)... raycasters obviously assume 'photon' paths (not really considering actual propagation through a medium or vector fields/particles), while raymarchers (aka volumetric/voxel) 'can' model this propagation through a medium, so they essentially model photon paths and photon payload... (more importantly, accurate attenuation and its effect on the various payloads during propagation). Photons can also be distorted ('diffraction/refraction'), which is an even moar accurate approximation of the light behaviour.... ultimately though, yes, it is a faked approximation built on our current understanding o.0...

However, I will point out... irrespective of how the various calculations are accelerated (if at all)... a lot of methods for approximating realistic lighting/lux are pointing towards brute-force 'ray-casting/marching' methods of some description (be it ray-traced, path-traced, voxel-traced/marched) or emulating it... which is essentially what a rasterizer does. But yes, I was wrong to bash them. I had a few bevvys, feeling nostalgic, hot day, and had single-pass rastering in my mind o.0.

Scali wrote:

Well, get yourself a PowerVR GPU then. They are already doing raytracing: https://www.imgtec.com/powervr/ray-tracing/

Been there... they don't do it so much anymore unfortunately (although they do still own Caustic)... I rode that crest for the short time it was breaking (it was only out for about 2 years, maybe realistically less than a year). 'Back int fall of 2013' I implemented a PoC using OpenRL. I used it to implement a BDPT (bi-directional path tracer) with 'various hacks... ahem, optimisations'; the end goal was a quick photo-realistic image of the working design (as always). Usually a couple hundred thousand polygons, but potentially in the millions depending on the organic, noisy nature of digitized data present in a lot of working scenes. Decent renders were in the 'seconds', results akin to large CUDA core boards... but without any CUDA/OpenCL kernels required.

Here is one of my R2500 boards...

The attachment R2500.jpg is no longer available

I have two, and yes, they scale! Apparently (although I was CPU-bottlenecked by just one on my workstation at the time 🙁). Note the blinkenlights top centre of the board! Knight Rider-style startup, and then iterating consecutively when all rays in the buffers have been cast 😀, so not only does it give a sense of the 'FPS', it also gives me a reason to buy a Perspex case.

16GB on board o.0 (more than a Titan X!), and 2x RTU (Ray-Tracing Unit... not *G*PU! No nVidia! You should know better Scali 😉 ... and no on-board shader execution either, while we are on the subject). However, a lot of the memory is taken by the internal acceleration structures employed (and you have no control over them, so it's not like that's 16GB available to the user). But it could take some large models with fast results.

It is essentially hardware-accelerated ray-casting, supplemented by a shader language, RLSL (YASL 😵)... As the name implies... it's like OpenGL... OpenGL 4.3, with some seasoning of 4.5 architecture (context usage/switching). Geometry would be uploaded to the board's buffers via OpenRL buffers (primitives/vertices), then you'd upload the shaders (a program for each primitive, which consisted of 'vertex shader' shaders and 'ray shader' shaders) and a 'frame shader' (which defined the camera projection, also jittered and could be used for lens effects). These shaders were compiled via LLVM and executed on the CPU. And therein lies a bit of the problem, because although the heavy intersection test is accelerated (and many of them), ultimately the general-purpose shading is CPU-bottlenecked. And mucho shader execution is required.

I wasn't entirely convinced by RLSL (limitations in its ability to express straightforward concepts in some cases), and so opted to do a sort of 'deferred' rendering: using the ray-casting capabilities of the card to calculate the intersections, normals and other metrics for my g-buffer, then rendering each bounce off-screen. I could then combine these in CPU land, but not through the RLSL framework (which I couldn't do BDPT with directly). I'm sure this shader execution could be siphoned off as compute kernels, thus freeing the CPU, and the stalls whittled out... I never got this far though before having to move to another project... and when I was allowed to return, OpenRL had been deprecated o.0.
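The 'deferred' structure described here, boiled down to a Python sketch (dummy buffers and a simple Lambert term; nothing here is actual OpenRL): whatever produced the per-pixel positions/normals (an RTU casting rays, or a rasterizer), the shading pass afterwards is just a loop over the g-buffer.

# Deferred shading from a g-buffer: the producer of the buffers is irrelevant to
# this pass; it only consumes per-pixel position/normal/albedo. All data is fake.
import numpy as np

H, W = 4, 4
g_position = np.zeros((H, W, 3)); g_position[..., 2] = 5.0    # hit points (fake)
g_normal = np.zeros((H, W, 3)); g_normal[..., 2] = -1.0       # normals facing the camera (fake)
g_albedo = np.full((H, W, 3), 0.7)

light_pos = np.array([0.0, 3.0, 0.0])

def deferred_shade(g_position, g_normal, g_albedo, light_pos):
    out = np.zeros_like(g_albedo)
    for y in range(H):
        for x in range(W):
            l = light_pos - g_position[y, x]
            l = l / np.linalg.norm(l)
            ndotl = max(0.0, float(np.dot(g_normal[y, x], l)))
            out[y, x] = g_albedo[y, x] * ndotl
    return out

print(deferred_shade(g_position, g_normal, g_albedo, light_pos)[0, 0])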

Imagination did tell me it was a PoC from their point of view, present on their PowerVR road map, and that the technology was to make its way into PowerVR chipsets (but I'm unsure if it ever did). Given recent Apple/Imagination relations, and the Vulkan band-wagon they jumped on (Vulkan may not even have been a glint in the milkman’s eye in 2013), I'm unsure how this road map is going right now. o.0

Last edited by spiroyster on 2017-06-22, 09:36. Edited 2 times in total.

Reply 26 of 33, by spiroyster

agent_x007 wrote:

1) How much does Photogrammetry help with rendering quality vs Raytracing (I mean, effects can look really nice to me) ?

You don't need to calculate the surface lux for each pixel, since most of this diffuse + Lambertian (colour + cosine law applied) information is already supplied via the colour photographic information of the slides that make up the model. View-dependent 'shading' calculations (specular reflection etc.) still need to be done at run-time, but the majority of direct lighting, including diffuse interreflection (close-proximity colour bleeding), is all already there so doesn't need 'shading'. Think of it a bit like baking... with the baked texture being photographic and thus containing colour, direct and indirect lighting in one map... and not just, say, a shadow map (direct illumination). Essentially most of what 'light-based' renderers attempt to model. So you simply display the textured/vertex-coloured polygons... But at the same time it's not baking, because it could all be in just one map, which means it's basically a single high-res texture.
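A trivial Python sketch of that difference (made-up values, illustrative names only): the conventional path evaluates the cosine law per pixel at runtime, while the photogrammetric path just reads back what the camera already captured.

# The point about photogrammetry 'not needing shading': the captured texture
# already contains albedo * incident lighting, so display is just a lookup,
# whereas a conventional renderer evaluates the cosine law at runtime.
import numpy as np

def shade_dynamic(albedo, normal, light_dir):
    """Conventional path: Lambert cosine law evaluated per pixel at runtime."""
    return albedo * max(0.0, float(np.dot(normal, light_dir)))

def shade_baked(baked_texel):
    """Photogrammetric path: the photo already 'is' the lit surface."""
    return baked_texel

albedo    = np.array([0.8, 0.4, 0.2])
normal    = np.array([0.0, 1.0, 0.0])
light_dir = np.array([0.0, 1.0, 0.0])

print(shade_dynamic(albedo, normal, light_dir))   # computed at runtime
print(shade_baked(np.array([0.8, 0.4, 0.2])))     # read straight from the capture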

Downsides are:
* The environment lighting is static, and cannot change since the effect relies on the actual lighting conditions when the photos were taken.
* The geometry is static (again, captured when the photos were taken)... any attempt to reposition this geometry or transform it would probably result in odd-looking, out-of-place lighting artefacts... with GI (indirect illumination), all visible surfaces from any given point on a surface in the scene *potentially* affect illumination at that point. Arguably, in most cases the contribution from those visible surfaces might be negligible (and in most cases is), so not using GI has been satisfactory, and any attempt to model GI (be it AO etc.) almost always produces more realistic results (albeit to a limited degree).
* Since we are capturing light in the photo... sometimes you can get visual feedback from the equipment used to capture it (especially with reflections), whose presence also technically affects the environmental light... Heisenberg... in order to observe, we must interact with the environment, which will have an effect on what we wish to be observing. If the aim is just to attempt to capture the topology, then this is not so much an issue.
* Textures would have to be high-res (and are of limited resolution), and filtering would take place for drawing pixels that fall on texel boundaries. Can't be avoided, but in certain situations it manifests and can look a bit stitched sometimes.

A number of years ago I had experience with a digitiser which scanned at a high enough resolution to be able to use 'colour per vertex' instead of low-poly with a texture. Almost photogrammetry, since it was essentially a scanner/digitiser but scanned colour (although since flatbed scanners can take long-exposure, high-res images, it could be argued scanners are 'photo'graphic). Photogrammetry would use photographs as reference to construct the model, and so employs different processes than just digitising from scanning.

But yes, it's a tool to visualise objects/environments/objects in environments, which produces very good results, and can be used with different rendering techniques. It has merit, can be rendered fast, but is also limited.

Scali wrote:

It's just a tool that you can use with a variety of rendering techniques.
Certain games develop their 3D characters by building an actual clay model or such, and then scanning it for the geometry.
Does motion capture also fall in that category? It is used a lot in games.

It's not motion capture directly... Although with enough camera positions around the subject, motion could be captured. It would only make sense to digitise the object's geometry each captured time slice if the subject geometry itself changed... e.g. photographing the motion of fluid and digitising both the geometry and photometric material properties of the motion. But this may produce limited results if one of the intentions is to capture the effect the environmental lighting has on the material and surface of the subject(s), given the feedback from the capture equipment (possibly occluding light sources etc.).

Matrix-'bullet-time' basically.

Reply 27 of 33, by Scali

spiroyster wrote:

Ok, you may be right here. Yes, that is an impressive resume (and has been annually for a while). This isn't my industry, but I do have acquaintances in this industry (whom I'm sure I have had this conversation with before?), and after quick discussions with some of them... it does appear that, while they tend to favour various tools for modelling and animation, a lot (all 3 actually... yeah I know, massive number) are using, or have recently used, RenderMan.

It would help if you checked out the history of RenderMan.
It is now Pixar's tool... But Pixar was split off from ILM earlier (it was the graphics group of ILM), and ILM was initially set up by Lucasfilm for some of the first CGI effects ever in the Star Wars movies.
Basically RenderMan (or at least its predecessors) was the first ever cinematic CGI software package. It set the standard, and still does today.
It's not a modeler or animation package though, it's a renderer. Pixar generally uses Maya for modeling and animation.

spiroyster wrote:

Arp, I've heard of those, but not really looked into applications of it. It looks like some kind of 3D spatial radiosity using voxels instead of surface patches? I wonder how it would handle material transmission and transparency, which is traditionally Radiosity's Achilles heel (my google fu fails to bring up any translucent/transparent surfaces rendered with VXGI, so currently I'm guessing it does suffer o.0). If it ray-marches it, it would probably work; if it rasters it, no... unless it employed some kind of per-voxel/per-projected-pixel operation or something? ... in the comments of your first link, nVidia say cone, so no casting.

While on the subject of raytracing on nVidia hardware... they have had their own Optix raytracer, which has been going for years. Early Optix demos, while visually impressive, weren't much faster (and were slower in some cases) than home-grown CPU 'ports' of the demos. This was a while ago now (like 2008/09... I can't remember, but it was pre-2010 iirc)... a lot has probably changed, and Iray (nVidia's physically based renderer) uses Optix. Results of VXGI do look impressive.

The thing with VXGI is that it is completely realtime. So comparing with something like Optix doesn't make any sense.
And it's a rendering method, you can do any number of things with it. Basically you first render your geometry (which can be anything, not just objects, but also volumetric light sources or participating media such as smoke particles) into a 3D voxel volume, and then you use cone tracing to see which voxels you hit. Then you can use whatever lighting equation you like to process the data.
I think this is something we could be seeing a lot in the next generation of games (in fact I'm surprised more games don't use it already; the only game I know that currently uses it is Rise of the Tomb Raider, and only for AO).
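For what it's worth, the cone-marching part of that can be sketched in a few lines of Python (the voxel pyramid below is a made-up occupancy field, not real scene data, and a real VXGI implementation stores radiance and does a lot more):

# Rough sketch of the cone-marching idea: step along the cone axis, and as the
# cone gets wider, sample ever coarser mips of the voxelized scene, accumulating
# occlusion front to back.
import numpy as np

def build_mips(vox):
    """Average-downsample the voxel grid into a mip pyramid."""
    mips = [vox]
    while mips[-1].shape[0] > 1:
        v = mips[-1]
        n = v.shape[0] // 2
        v = v[:2*n, :2*n, :2*n].reshape(n, 2, n, 2, n, 2).mean(axis=(1, 3, 5))
        mips.append(v)
    return mips

def sample(mips, p, level):
    """Point-sample the pyramid at position p (in [0,1)^3) and mip level."""
    level = min(int(level), len(mips) - 1)
    v = mips[level]
    idx = np.clip((p * v.shape[0]).astype(int), 0, v.shape[0] - 1)
    return v[tuple(idx)]

def cone_trace(mips, origin, direction, aperture=0.3, max_dist=1.0):
    occlusion, dist = 0.0, 0.02
    while dist < max_dist and occlusion < 1.0:
        radius = aperture * dist                      # cone gets wider with distance
        level = np.log2(max(radius * mips[0].shape[0], 1.0))
        a = sample(mips, origin + direction * dist, level)
        occlusion += (1.0 - occlusion) * a            # front-to-back accumulation
        dist += max(radius, 1.0 / mips[0].shape[0])   # step proportional to cone radius
    return occlusion

vox = np.zeros((32, 32, 32)); vox[20:24, 12:20, 12:20] = 1.0   # a blocker
mips = build_mips(vox)
print(cone_trace(mips, np.array([0.1, 0.5, 0.5]), np.array([1.0, 0.0, 0.0])))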

spiroyster wrote:

Yeah, I was trying to imply that I think a lot of these maps would be generated from 'ray-casting' (be it raytracing - from the eye, path tracing/photon tracing - from the luminaires, or raymarching - volumetric ray casting... or even radiosity). The maps themselves could be the result of multi-bounce calculations... once you have these maps, the frame can be rasterized or raytraced. But if the maps are all constructed with raycast methods, simply using rasterization to present the final image means the rasterizer is nothing more than a glorified blitter? But yes, you could rely on rasterization to generate these shadow maps... I guess even soft area-lit shadows... we need to leave point lights out though, since these don't exist IRL, so no reason to include them in 'realistic' rendering...

Historically shadowmaps were done by rasterizers though, because of the massive speed advantage. In fact, RenderMan is responsible for popularizing shadowmaps.
I'm talking about actual shadowmaps of course, updated every frame. Not any static 'pre-baked' lightmaps like in old games such as Quake.
Shadowmaps are basically a 'view' of the scene as a light sees it, basically a z-buffer, telling you which pixels are closest to the lightsource. You reproject this 2d bitmap over the scene so that you can compare the pixels that your camera sees to the pixels that the light sees, to determine whether they are in light or shadow. For directional lights, a single 2d bitmap will do. For omnidirectional lights, you can use cubemaps.
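That reprojection-and-compare step, as a minimal Python sketch (placeholder matrix and depth map, made-up function names; just to show the structure):

# The shadow-map test in one function: transform the point seen by the camera
# into the light's view, look up the depth the light recorded there, and compare.
import numpy as np

def in_shadow(world_pos, light_view_proj, shadow_map, bias=1e-3):
    # Project the world position into the light's clip space.
    p = light_view_proj @ np.append(world_pos, 1.0)
    p = p[:3] / p[3]                                    # perspective divide
    u = int((p[0] * 0.5 + 0.5) * (shadow_map.shape[1] - 1))
    v = int((p[1] * 0.5 + 0.5) * (shadow_map.shape[0] - 1))
    if not (0 <= u < shadow_map.shape[1] and 0 <= v < shadow_map.shape[0]):
        return False                                    # outside the light's view
    # The light stored the nearest depth it saw in this direction; if our point
    # is further away than that, something sits between it and the light.
    return p[2] - bias > shadow_map[v, u]

# Trivial usage with an identity 'light transform' and a flat depth map:
shadow_map = np.full((256, 256), 0.5)
print(in_shadow(np.array([0.0, 0.0, 0.8]), np.eye(4), shadow_map))   # True: behind the occluder
print(in_shadow(np.array([0.0, 0.0, 0.2]), np.eye(4), shadow_map))   # False: in front of it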

spiroyster wrote:

For diffuse interreflection, there is most certainly a second bounce required.

That depends on the rendering algorithm you're using, does it not?
I mean, if you use photon mapping, then you don't need a second bounce during actual rendering of the scene. You'll only need multiple bounces during the actual photon tracing phase. But in that phase, you're just tracing the photons, you're not actually rendering pixels and not evaluating the shading equation yet.
In the actual rendering pass, you can simply do lookups into the photon maps after the first bounce, since all the diffuse interreflection has already been performed during the photon tracing phase.
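The lookup at the first bounce is essentially a density estimate over the nearest stored photons; a brute-force Python sketch with made-up photon data (a real implementation would use a kd-tree):

# Photon-map radiance estimate: gather the k nearest photons around the hit
# point and divide their summed power by the area of the disc they cover.
import numpy as np

def radiance_estimate(photon_positions, photon_powers, hit_point, k=32):
    d = np.linalg.norm(photon_positions - hit_point, axis=1)
    nearest = np.argsort(d)[:k]
    radius = d[nearest].max()
    if radius == 0.0:
        return np.zeros(3)
    return photon_powers[nearest].sum(axis=0) / (np.pi * radius * radius)

rng = np.random.default_rng(1)
positions = rng.uniform(-1.0, 1.0, size=(5000, 3))       # photon hit points (fake)
powers = np.full((5000, 3), 1e-4)                         # flux carried by each photon (fake)
print(radiance_estimate(positions, powers, np.array([0.0, 0.0, 0.0])))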

spiroyster wrote:

And it's this secondary (higher order) bounce that a rasterizer cannot do without setting up a new view projection from the viewpoint of the intersection.

Nobody ever claimed that using a rasterizer was a good idea for anything other than the first bounce.
Point is just that it's a much better idea for the first bounce than a raytracer, so hybrid rendering is the way to go.

spiroyster wrote:

Imagination did tell me it was a PoC from their point of view, present on their PowerVR road map, and that the technology was to make its way into PowerVR chipsets (but I'm unsure if it ever did). Given recent Apple/Imagination relations, and the Vulkan band-wagon they jumped on (Vulkan may not even have been a glint in the milkman’s eye in 2013), I'm unsure how this road map is going right now. o.0

Well, based on the link I sent, it made it into their latest GPUs. And the idea is once again hybrid rendering: first rasterize the first bounce, then you can shoot off rays for additional effects. I assume you won't need to go back to the CPU to actually shade those, but can just use the GPU's shading capability.
In fact, why would you use the CPU anyway? Wouldn't it have been better to let the RPU do its magic into some buffers, then feed the buffers to a GPU, and perform deferred rendering there? Would be oodles faster than a CPU.

Last edited by Scali on 2017-06-22, 13:11. Edited 1 time in total.


Reply 29 of 33, by spiroyster

Scali wrote:

It would help if you checked out the history of RenderMan.

Indeed, tbh outside of Pixar, that's what I thought it was... history o.0

Scali wrote:

...in the Star Trek movies.

Uh-oh... me thinks you're gonna be getting a lot of hate mail from <whoever>@starwarsappreciationsociety.tatooine 😀

Scali wrote:

The thing with VXGI is that it is completely realtime. So comparing with something like Optix doesn't make any sense.

I wasn't comparing it to Optix, I just mentioned Optix because it's an 'alternative' (nVidia) method which is hardware-accelerated... and in many cases, Optix can be real-time (scene complexity/algorithm dependent of course).

Scali wrote:

I think this is something we could be seeing a lot in the next generation of games (in fact I'm surprised more games don't use it already; the only game I know that currently uses it is Rise of the Tomb Raider, and only for AO).

Agreed, tbh this is why I sometimes shy away from the marketing demos. While impressive, they are always 'canned'. Then, given the lack of examples that seem to actually apply them in all their glory... it makes me wonder if investing time and resources into it is worth it 🙁. The particle/fluid demos they do are truly excellent and likewise VXGI vids.

Scali wrote:

Historically shadowmaps were done by rasterizers though, because of the massive speed advantage. In fact, RenderMan is responsible for popularizing shadowmaps.
I'm talking about actual shadowmaps of course, updated every frame. Not any static 'pre-baked' lightmaps like in old games such as Quake.
Shadowmaps are basically a 'view' of the scene as a light sees it, basically a z-buffer, telling you which pixels are closest to the lightsource. You reproject this 2d bitmap over the scene so that you can compare the pixels that your camera sees to the pixels that the light sees, to determine whether they are in light or shadow. For directional lights, a single 2d bitmap will do. For omnidirectional lights, you can use cubemaps.

Yeah, I was thinking more along the lines of baking. Well assumed.
Out of curiosity, how would you do real-time soft shadows? Or even generate your real-time soft shadow maps?

Scali wrote:

That depends on the rendering algorithm you're using, does it not?

True. But for diffuse interreflection?? The only attempt I know about is SESSAO (https://forum.unity3d.com/threads/sessao-high … r-bleed.323511/)... while the colour is there... it has that same loss of illusion that I find with SSAO (subjective of course). Are there any others?

Scali wrote:

I mean, if you use photon mapping, then you don't need a second bounce during actual rendering of the scene. You'll only need multiple bounces during the actual photon tracing phase. But in that phase, you're just tracing the photons, you're not actually rendering pixels and not evaluating the shading equation yet.
In the actual rendering pass, you can simply do lookups into the photon maps after the first bounce, since all the diffuse interreflection has already been performed during the photon tracing phase.

Yes, during actual image generation this can be rastered. Generally, with photon mapping, multiple maps tend to be used (for stuff such as caustics). This is fine for static geometry, because you could, say, freely move (in real-time) around the scene (except view-dependent effects like specular would still need to be done during drawing). The moment anything in the scene moves, while many photons could be re-used... to be safe we need to assume that the maps are dirty and ALL photons should be retraced... busy spinny blue aerobie mouse pointer (not real) time.

https://www.keyshot.com/ is a great implementation of this... incidentally originally written by Henrik Jensen (father of photon mapping!)

Scali wrote:

Nobody ever claimed that using a rasterizer was a good idea for anything other than the first bounce.

True, I might have made that up and subsequently angered myself over it o.0.

Scali wrote:

Point is just that it's a much better idea for the first bounce than a raytracer, so hybrid rendering is the way to go.

Yes, given some of the tech mentioned above, hybrid would appear to be 'the way'. The OpenRL SDK includes a hybrid example which uses OpenRL to calculate real-time raytraced shadowmaps, the results of which weren't entirely convincing to me. This may have skewed my earlier views on hybrid rendering... but those nVidia piccys do make me warm.

Scali wrote:

Well, based on the link I sent, it made it into their latest GPUs. And the idea is once again hybrid rendering: first rasterize the first bounce, then you can shoot off rays for additional effects.

Unfortunately, OpenRL (and subsequently RLSL) didn't join them. Maybe it was incorporated into the PowerVR SDK somehow? idk. I have seen PowerVR boards that plug into computers (i.e. PowerVR on PC... again), but nothing that seems to have made it past working concepts. I don't think they push their SoCs to PC OEMs too much? Or everybody who might be interested already has their own. PowerVR is strictly mobile.

Scali wrote:

I assume you won't need to go back to the CPU to actually shade those, but can just use the GPU's shading capability.
In fact, why would you use the CPU anyway? Wouldn't it have been better to let the RPU do its magic into some buffers, then feed the buffers to a GPU, and perform deferred rendering there? Would be oodles faster than a CPU.

Yes, I assumed the CPU would be balancing/scheduling only, if the project ever got to that state. I also think there was scope for additional intelligence to mitigate stalls and strategically weight the detail required, or prioritise the type of shader to be executed (something which might have needed more CPU usage, idk). For whatever reason, I didn't get that far 🙁

Last edited by spiroyster on 2017-06-22, 12:43. Edited 1 time in total.

Reply 30 of 33, by spiroyster

Scali wrote:
spiroyster wrote:

It's not motion capture directly... Although with enough camera positions around the subject, motion could be captured.

You don't need all that many cameras. Just look at Kinect.

Very true, and a remarkable bit of kit it was when it came out (and still is tbh). It does have black spots though, and detailed topology might be asking a bit too much of it o.0.

Reply 31 of 33, by Scali

spiroyster wrote:

Out of curiosity, how would you do real-time soft shadows? Or even generate your real-time soft shadow maps?

There are various ways.
One way is basically just what you'd normally do in a raytracer: do jittered sampling, and then average it out.
There's also things like 'variance shadow maps': http://www.punkuser.net/vsm/
Basically the idea is to store more information than just the depth, so that you can perform more advanced sampling/filtering than just 'in light'/'in shadow'.
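The variance shadow map idea in a nutshell: store depth and depth squared, filter those like any texture, and use Chebyshev's inequality to get a soft 'probability of being lit' instead of a hard yes/no. A minimal Python sketch with made-up values:

# Variance shadow map test (sketch): the map stores E[z] and E[z^2]; Chebyshev's
# inequality then gives an upper bound on the fraction of the filtered region
# that is nearer the light than the receiver, i.e. a soft shadow factor.
def vsm_visibility(mean_depth, mean_depth_sq, receiver_depth, min_variance=1e-4):
    if receiver_depth <= mean_depth:
        return 1.0                                  # in front of the (average) occluder: fully lit
    variance = max(mean_depth_sq - mean_depth * mean_depth, min_variance)
    d = receiver_depth - mean_depth
    return variance / (variance + d * d)            # Chebyshev upper bound on P(lit)

# A receiver just behind a tight cluster of occluders is mostly shadowed...
print(vsm_visibility(mean_depth=0.5, mean_depth_sq=0.2501, receiver_depth=0.7))
# ...while a blurred/filtered penumbra region gives a value in between.
print(vsm_visibility(mean_depth=0.5, mean_depth_sq=0.29, receiver_depth=0.7))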

spiroyster wrote:

https://www.keyshot.com/ is a great implementation of this... incidentally originally written by Henrik Jensen (father of photon mapping!)

Yup, a friend and I wrote a photon mapper some 15 years ago, mainly based on the publications of Henrik Wann Jensen.

The attachment SS3.jpg is no longer available
The attachment SS4.jpg is no longer available


Reply 32 of 33, by devius

Scali wrote:

Yup, a friend and I wrote a photon mapper some 15 years ago, mainly based on the publications of Henrik Wann Jensen.

If that's realtime then it's very impressive. If not, it's still good 😀

Reply 33 of 33, by Scali

devius wrote:

If that's realtime then it's very impressive. If not, it's still good 😀

It was 15 years ago... we still had single-core CPUs, I think in the range of 500 MHz... so no, not exactly realtime (wasn't our aim anyway).
Depending on the quality you wanted, it could take from seconds to minutes per frame.
