VOGONS


First post, by Scali

Rank: l33t
Reputator wrote:

The R100 was even more capable in its pixel shading features, apparently just falling short of the original DX8.0 spec, right?

So they say.
The theory was that 'SM1.0' was meant to be the original Radeon. And you do wonder... GF3 supports VS1.1 and PS1.1. What happened to 1.0?
I suppose we'll never know.
I never used an R100 card myself, and I'm not sure what it is exactly that it could do.

Reputator wrote:

Your demos are very impressive. They look better than NVIDIA's own per-pixel GeForce 2 lighting demo.

Thanks.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 1 of 33, by swaaye

Rank: l33t++

I think 3dfx Rampage was built around whatever "pixel shader 1.0" is.
http://ixbtlabs.com/articles/3dfxtribute/index.html

The original Radeon had something like NVIDIA's Shading Rasterizer / register combiners (NV1x), but I read an article years ago that described how NV's solution was more useful in the end. I wish I could find that article again, but I have no idea what site it was on. On the other hand, the Radeon could perform EMBM, whereas only NV20 onward can do that.

Reply 2 of 33, by Scali

Rank: l33t
Reputator wrote:

EMBM seems to be another case of the infamous "cap bits": its support goes back to DX6, but I'm unsure if and when it ever became a minimum requirement. Clearly NVIDIA could exclude support for it and still claim full DX6 and DX7 compliance, but Matrox and ATI went beyond those requirements.

Prior to D3D8 there were no minimum requirements.
D3D7 and lower can all run on any legacy drivers for any D3D card (any DDI version), and any functionality is exposed via caps bits.
I made an overview of this some years ago: https://scalibq.wordpress.com/2012/12/07/dire … -compatibility/
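
For example (a minimal sketch in D3D9 syntax for brevity; the DX6/DX7 caps structures expose the same kind of bits through DirectDraw/Direct3D), an application checks the caps bits for the operations it wants to use, rather than assuming anything from the DirectX version:

Code:
#include <d3d9.h>

// 'device' is assumed to be a valid, already-created IDirect3DDevice9*.
bool SupportsEmbmAndDot3(IDirect3DDevice9* device)
{
    D3DCAPS9 caps;
    if (FAILED(device->GetDeviceCaps(&caps)))
        return false;

    // There is no guaranteed feature set: every texture operation has its
    // own cap bit, so you test each operation you intend to use.
    const bool embm = (caps.TextureOpCaps & D3DTEXOPCAPS_BUMPENVMAP) != 0;
    const bool dot3 = (caps.TextureOpCaps & D3DTEXOPCAPS_DOTPRODUCT3) != 0;

    // How many texture stages you can cascade in one pass is also just a cap.
    return embm && dot3 && caps.MaxTextureBlendStages >= 3;
}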

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 3 of 33, by Reputator

Rank: Member
Scali wrote:

Prior to D3D8 there were no minimum requirements.
D3D7 and lower can all run on any legacy drivers for any D3D card (any DDI version), and any functionality is exposed via caps bits.
I made an overview of this some years ago: https://scalibq.wordpress.com/2012/12/07/dire … -compatibility/

A lot of really good information in there. Thank you!

https://www.youtube.com/c/PixelPipes
Graphics Card Database

Reply 4 of 33, by swaaye

Rank: l33t++
Reputator wrote:

But obviously that doesn't imply a programmable shader, even if the term's use seems to be more commonly associated with DX8 and up. The extent to which the R100's shaders were actually "programmable" will probably remain a mystery, and per Scali's quote seems to be something early drivers dabbled in but have since buried.

ATI called their architecture "pixel tapestry" so if you find documents about that you can learn what it can do. For example,
https://web.archive.org/web/20010204033400/ht … /techspecs.html

Maybe Scali can translate what the "pixel shader" aspect might be capable of.

Reply 5 of 33, by Scali

Rank: l33t
swaaye wrote:

Maybe Scali can translate what the "pixel shader" aspect might be capable of.

Well, if this is an accurate description of what the GPU is capable of (as opposed to what functionality is exposed under the D3D API), then I suppose what was said in the quote above is quite true: 99% of the functionality can be done with the fixed function pipeline (not sure where that 1% goes though).
As the quote tried to explain, the term 'shader' was not strictly defined in the days before D3D8. The name was originally promoted by Pixar's RenderMan software, but their shaders are way different from what you got in D3D8.
So as we moved from simple 3D accelerators to GPUs and per-pixel lighting, I can see why they wanted to use the term 'shader' to draw similarities with RenderMan (which was, and still is, the benchmark of CGI).
When Microsoft started using the term 'shader' however, they used the term as part of a standardized API and programming model, so in that context, your hardware didn't just have to be capable of shading pixels, but it had to have hardware functionality and drivers that were compatible with this API and programming model.

So what the R100 had:
- Fixedfunction T&L (including vertex skinning and blending)
- Cube mapping
- 3D textures
- Emboss
- DOT3
- EMBM
- Projective texturing
- Shadow mapping

All of that is possible with the fixed function pipeline in D3D.
The 'programming' they describe is a very simplified programming model, which is basically how the D3D fixed function pipeline works:
Your hardware supports a number of 'texture stages' per pass (in the case of the R100, it supports 3 stages). For each stage, you can specify two texture operations: one for the RGB channel, and one for the alpha channel. For each operation you specify two source registers and one destination register.
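
As a rough illustration (using D3D9 syntax for brevity rather than the exact interfaces of the era), a DOT3 + modulate cascade in that model looks something like this; 'device' is assumed to be a valid device with a normal map bound to stage 0 and a diffuse map to stage 1:

Code:
#include <d3d9.h>

// Stage 0 does a per-pixel DOT3 between the normal map and the light vector
// packed into the vertex colour; stage 1 modulates the result with the base
// texture. Each stage is one 'instruction': an operation plus two arguments.
void SetupDot3Cascade(IDirect3DDevice9* device)
{
    // Stage 0: N.L via DOT3
    device->SetTextureStageState(0, D3DTSS_COLOROP,   D3DTOP_DOTPRODUCT3);
    device->SetTextureStageState(0, D3DTSS_COLORARG1, D3DTA_TEXTURE);  // normal map
    device->SetTextureStageState(0, D3DTSS_COLORARG2, D3DTA_DIFFUSE);  // light vector

    // Stage 1: modulate the lighting term with the diffuse texture
    device->SetTextureStageState(1, D3DTSS_COLOROP,   D3DTOP_MODULATE);
    device->SetTextureStageState(1, D3DTSS_COLORARG1, D3DTA_TEXTURE);  // base map
    device->SetTextureStageState(1, D3DTSS_COLORARG2, D3DTA_CURRENT);  // stage 0 result

    // Stage 2: end of the cascade
    device->SetTextureStageState(2, D3DTSS_COLOROP, D3DTOP_DISABLE);
}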

It's no different from what I did with my GeForce2 demos. The difference is that the R100 is even closer to the capabilities of a ps1.1-GPU than the GF2 is. Namely, the GF2 only has 2 texture stages, and lacks some features. It doesn't have EMBM, and I don't think it had 3D texture support either.
The Kyro II is also an interesting card, it actually offered 8 texture stages (and both EMBM and DOT3).

Things that are glaringly missing from the R100 compared to the GF3:
- ps1.1 requires 4 texture stages
- GF3 supports programmable vertex shaders, vs1.1

Those vertex shaders are really all-important if you truly want to use per-pixel lighting with DOT3. Namely, all the fixedfunction T&L is focused on per-vertex lighting only. If you want to use DOT3, you need to set up the per-vertex data differently, because you will be interpolating actual normal vectors, rather than ARGB gradients (funny detail is that you're interpolating them linearly, so they may get denormalized in the process. The trick was to use a renormalization cubemap. It wasn't until ps1.4 that we actually had an instruction to normalize vectors).
So for the per-pixel lighting I did, I had to at least partly bypass the T&L pipeline and use the CPU instead, to set up the geometry for DOT3 lighting.
You would need to do the same on the R100.
I developed a 'hybrid' system though: I used dynamic vertexbuffers, and my CPU routines would simply update only the per-vertex normals to be interpolated. Everything else could be calculated by the fixedfunction T&L. So I reduced the CPU-load to a minimum, and still tried to maximize the benefit of accelerated T&L. It worked quite well, especially since the per-pixel lighting allowed bumpmaps for detail, so you didn't need very high-poly geometry.
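
To make that concrete, here's a rough sketch of that kind of CPU-side update; the vertex layout and helper names are made up for illustration, not taken from the actual demo code. The sketch packs a per-vertex light vector, but packing a (transformed) normal works the same way; everything else in the vertex stays with the fixed-function T&L:

Code:
#include <cstddef>
#include <cstdint>
#include <cmath>

struct Vec3 { float x, y, z; };

// Hypothetical vertex layout: position/normal/UV stay untouched, only the
// packed vector (stored in the diffuse colour) is rewritten per frame.
struct Vertex
{
    Vec3     pos;
    Vec3     normal;
    uint32_t diffuse;   // packed vector for the DOT3 stage
    float    u, v;
};

// Pack a unit vector into D3DCOLOR-style 0..255 range, biased so that -1..1
// maps to 0..255 (D3DTOP_DOTPRODUCT3 re-expands this to -1..1 internally).
static uint32_t PackVector(Vec3 v)
{
    auto pack = [](float f) { return (uint32_t)((f * 0.5f + 0.5f) * 255.0f); };
    return (pack(v.x) << 16) | (pack(v.y) << 8) | pack(v.z); // X->R, Y->G, Z->B
}

// Update only the packed vectors in a locked dynamic vertex buffer. For
// brevity the light vector is left in object space; a real implementation
// would rotate it into tangent space per vertex.
void UpdateLightVectors(Vertex* verts, size_t count, Vec3 lightPosObj)
{
    for (size_t i = 0; i < count; ++i)
    {
        Vec3 L = { lightPosObj.x - verts[i].pos.x,
                   lightPosObj.y - verts[i].pos.y,
                   lightPosObj.z - verts[i].pos.z };
        float len = std::sqrt(L.x * L.x + L.y * L.y + L.z * L.z);
        if (len > 0.0f) { L.x /= len; L.y /= len; L.z /= len; }
        verts[i].diffuse = PackVector(L);
    }
}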

Other than that, my recollection of ps1.1 is quite rusty, but I don't recall it being capable of much more than just DOT3, EMBM and 'traditional' texturemapping (with applying a texture matrix). All of which you can do with fixedfunction really. I suppose ps1.1 mainly allows you to do some additional arithmetic aside from the texture-fetches. Fixedfunction really only allows one texture operation per texture stage, so if your hardware had 3 texture stages, you could execute 3 instructions in one pass. ps1.1 allowed 12 instructions and 4 texture reads per pass.

For comparison, here you can find all ps1.x instructions:
https://msdn.microsoft.com/en-us/librar ... s.85).aspx
And here you can find all D3D fixed function texture operations:
https://msdn.microsoft.com/en-us/librar ... s.85).aspx

The difference is minimal.
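
To illustrate just how minimal: the two fixed-function stages sketched above map almost one-to-one onto a ps.1.1 shader. A sketch, assembled with the D3DX helper (again D3D9-era syntax for convenience, not code from an actual demo):

Code:
#include <d3d9.h>
#include <d3dx9.h>

// The same two 'instructions' as the fixed-function cascade above, written
// as a ps.1.1 shader: two texture reads plus two arithmetic instructions.
static const char kPs11[] =
    "ps_1_1\n"
    "tex t0\n"                   // fetch normal map
    "tex t1\n"                   // fetch diffuse map
    "dp3 r0, t0_bx2, v0_bx2\n"   // N.L, expanding 0..1 back to -1..1
    "mul r0, r0, t1\n";          // modulate with the base texture

IDirect3DPixelShader9* CreateDot3Shader(IDirect3DDevice9* device)
{
    ID3DXBuffer* code = nullptr;
    if (FAILED(D3DXAssembleShader(kPs11, sizeof(kPs11) - 1,
                                  nullptr, nullptr, 0, &code, nullptr)))
        return nullptr;

    IDirect3DPixelShader9* ps = nullptr;
    device->CreatePixelShader((const DWORD*)code->GetBufferPointer(), &ps);
    code->Release();
    return ps;
}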

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 6 of 33, by Reputator

Rank: Member

I think you may have established the definitive answer, once and for all, on what the mystery behind the R100 "shaders" was. It only took 17 years!

Thanks Scali!

https://www.youtube.com/c/PixelPipes
Graphics Card Database

Reply 7 of 33, by spiroyster

Rank: Oldbie
Scali wrote:

...heavy shit...

Is that why the phrase 'shader' is used for vertex manipulation?

For me, shaders seemed a quick fudge to leverage per-vertex/pixel operations out of the fixed function pipeline. The pipeline is essentially a black box: you put vertex/material info in one end and get an image out of the other. Aside from the various flags/states you could set, there was no control within the pipeline over individual entities (be it vertices or pixels), which meant you needed to wait for the output buffers, and any combining of their data had to be done in CPU land (outside the pipeline) if what you required fell outside of the basic texture combining/blending exposed in the API. The fact that this functionality was applied to vertex & pixel data first was a logical progression... these were the areas that really needed this extra 'stage' in the pipeline, a stage that was 'partially' user-programmable, to save a myriad of hardware-dependent extensions that may or may not be present.

I always thought someone at ATi didn't get the memo, and so used the phrase 'correctly' for a fragment/pixel shader, and then 'incorrectly' for the vertex shader (because a vertex shader's rationale was similar to the pixel shader's rationale in terms of pipeline breakage). But it does make sense to use 'shader' for the vertex operation, since part of the DOT3 (which IS 'shading') would require manipulation of the normal data?

Still 'Compute' shader ??? o.0 wtf ... no excuse, bad name. I've always preferred the word 'kernel'.

Fun fact: nVidia own the patent on accelerated bump mapping as part of the graphics pipeline (I think they got it from 3dfx). Not sure how ATi could do their 'pixel tapestries' (or any other graphics card manufacturer for that matter) without violating nVidia's patent? Or am I missing something here?
https://www.google.com/patents/US6297833

Reply 8 of 33, by Scali

Rank: l33t
spiroyster wrote:
Scali wrote:

...heavy shit...

Is that why the phrase 'shader' is used for vertex manipulation?

Well, I don't know if you're familiar with RenderMan, but it uses what is called a REYES-renderer.
Basically it means that everything is rendered with high-order surfaces (NURBS), tessellated down to sub-pixel level (micropolygons). Then each polygon can be 'splatted' to a pixel directly. So basically there is no actual vertex processing or polygon rasterizing at all. There's just the subdivision algorithm, and the resulting polygon can be rendered as a single pixel. So basically it's vertex and pixel processing rolled into one.
That's why RenderMan allows you to express the shading for a material in just a single 'shader' program. It is just one operation, performed at the end of the pipeline.

With 3D acceleration hardware, initially all the shading was done at the polygon or vertex-level, because it is far more efficient for low polycounts. Eg, for flat and gouraud-shading, there isn't any actual lighting/shading being calculated at the pixel stage. For flat-shading, the colour is just a constant over the entire polygon, and for gouraud-shading, the colour is interpolated linearly by the rasterizer (just like the per-pixel z-values for the zbuffering). The pixel 'shading' was little more than just modulating the texture with the colour that was fed from the rasterizer stage.

So in that situation, it makes more sense to use the term 'shader' for the vertex processing than for the pixel processing.
But as the pixel processing got more advanced, eg with the capability of doing DOT3, you could calculate actual Lambertian terms in a lighting equation at the pixel stage.
So you started to do some actual lighting/shading calculations per-pixel. Unlike REYES rendering you still had to do a lot of setup at the vertex-stage though. So basically your 'shader program' for your material was split up over two parts of the pipeline: the vertex shader and the pixel shader.

spiroyster wrote:

For me, shaders seemed a quick fudge to leverage per-vertex/pixel operations out of the fixed function pipeline.

The first generation of shaders was mostly about the per-vertex operations. Instead of a hardwired T&L state machine, you now had a powerful SIMD FPU which could run a program at every vertex, allowing you to put all kinds of values into the rasterizer for interpolation, and feeding it to the pixel stage.
In fact, there was hardware on the market that supported vs1.1, but no pixel shaders. I have an old laptop with a Radeon IGP that is like that. The pixel backend was basically an R100 fixedfunction pipeline. This was a very powerful setup still, because the vertex shader allowed me to do all kinds of things like vertex skinning animation, setting up shadow volumes, preparing normalmap tangentspace etc. The R100 backend was then powerful enough to perform the per-pixel lighting with DOT3. Not quite a GF3, but not too far off.

spiroyster wrote:

Still 'Compute' shader ??? o.0 wtf ... no excuse, bad name. I've always preferred the word 'kernel'.

Depends on how you look at it, I suppose. By the time compute shaders came around, our GPUs had evolved from 'vertex shader units' and 'pixel shader units' to 'unified shader units'. So at the hardware level there was no difference anymore. There was just a pool of shader units that could be allocated to either vertex or pixel operations. In fact, by that time we also had the geometry shader already, and the domain and hull shader were also introduced for programmable tessellation.
In that context, 'compute shader' is not that strange, because you're basically using the same shader hardware, but you are simply bypassing the pipeline. And at least in some cases, you are actually performing shading-related calculations still.
But that's how I see it anyway, you are using the 'shader hardware' to perform generic 'computations' (as in, outside of the context of a geometry pipeline).
And 'compute shader' is a D3D-specific term. Within the context of that API it makes more sense than elsewhere, since the API for compute shaders is mostly the same as for any other type of shaders, and you use the same HLSL programming language and compiler for these shaders.
With OpenGL it's a different story. There's no compute shader in the API. You use the completely separate OpenCL API, and the only 'communication' you have between the APIs is some OpenGL extensions to share buffers.
I don't think people in the OpenCL world actually call them 'compute shaders' anyway.

spiroyster wrote:

Fun fact: nVidia own the patent on accelerated bump mapping as part of the graphics pipeline (I think they got it from 3dfx). Not sure how ATi could do their 'pixel tapestries' (or any other graphics card manufacturer for that matter) without violating nVidia's patent? Or am I missing something here?
https://www.google.com/patents/US6297833

I think what you're missing is that the other GPU manufacturers also have a collection of similar patents, so nobody has a 'monopoly' on building GPUs. They have cross-license agreements.
In fact, Microsoft makes it an issue in D3D to only include functionality that all GPU vendors can support. You generally will not see functionality in D3D that is exclusive to a single vendor.
At best you'll see functionality that is currently only supported by a single vendor, but will be included by other vendors in future iterations of their hardware as well. ps1.4 is a fine example of that: The only 'real' ps1.4 hardware is the Radeon 8500 and derivatives. But all future GPUs have to support it.

Another example would be conservative rasterization: at first only NV supported it. Now Intel also supports it (actually a higher tier than NV does), and at some point in the future, AMD will as well.

Example from the other side would be tessellation: ATi had been playing around with a proprietary tessellation implementation for years, but never found support, because other vendors were unwilling to implement it.
When DX11 arrived, hardware was at a point where you could finally implement significantly programmable tessellation with good performance. So at this point a proper spec was standardized. Ironically enough, AMD's implementation turned out to be sub-par when NV released its Fermi architecture, NV's first architecture since the early high-order surface support in GF3 to have any form of tessellation.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 9 of 33, by spiroyster

Rank: Oldbie
Scali wrote:

Well, I don't know if you're familiar with RenderMan, but it uses what is called a REYES-renderer.
Basically it means that everything is rendered with high-order surfaces (NURBS), tessellated down to sub-pixel level (micropolygons). Then each polygon can be 'splatted' to a pixel directly. So basically there is no actual vertex processing or polygon rasterizing at all. There's just the subdivision algorithm, and the resulting polygon can be rendered as a single pixel. So basically it's vertex and pixel processing rolled into one.
That's why RenderMan allows you to express the shading for a material in just a single 'shader' program. It is just one operation, performed at the end of the pipeline.

With 3D acceleration hardware, initially all the shading was done at the polygon or vertex-level, because it is far more efficient for low polycounts. Eg, for flat and gouraud-shading, there isn't any actual lighting/shading being calculated at the pixel stage. For flat-shading, the colour is just a constant over the entire polygon, and for gouraud-shading, the colour is interpolated linearly by the rasterizer (just like the per-pixel z-values for the zbuffering). The pixel 'shading' was little more than just modulating the texture with the colour that was fed from the rasterizer stage.

So in that situation, it makes more sense to use the term 'shader' for the vertex processing than for the pixel processing.
But as the pixel processing got more advanced, eg with the capability of doing DOT3, you could calculate actual Lambertian terms in a lighting equation at the pixel stage.
So you started to do some actual lighting/shading calculations per-pixel. Unlike REYES rendering you still had to do a lot of setup at the vertex-stage though. So basically your 'shader program' for your material was split up over two parts of the pipeline: the vertex shader and the pixel shader.

Yes, fairly-ish familiar with REYES (at least the concepts; not looked at it in detail for many years though). I wrote a RIB importer once for my raytracer (albeit with limited support). This is where I get the confusion, because 'shade' implies a lighting calculation.. granted this depends on the topology; traditionally in raytracers all geometry setup (modelview-transformed points, normals and texture transforms) is done prior to casting (aka data upload/scene description), since all casting is ultimately done in world-space. I will add I was late to the GL shader party, and my first introduction to vertex shaders was more along the lines of surface manipulation (including, and not just, normal distortion)... nice siney wavey water with lurvly phong (I may have had the luxury of a 'relatively speaking' flexible shader model when I started, certainly didn't realise how restrictive early standards were o.0). It was this concept of transforming vertex positions via a programmable shader (within the 'pipe') that threw me, something which I would have thought should be done as part of the scene/primitive definition before rendering.

With vertex shaders, there was reduced overhead of vertex/scene data upload, since this wasn't CPU-bound: the vertices can be manipulated entirely within the GPU. That's what I thought they kinda evolved from?... I have just had a bit of a penny-drop moment and can see the correlation between the 'vertex shader' and shading, if approached from 'using the vertex shader to distort normals for bump mapping' for later per-pixel processing.

In REYES, the lighting calc is done in the shader; the subdivision/micro-faceting is a by-product of the algorithm, which was optimised for speed given the hardware at the time (there was even a Pixar card http://forums.nekochan.net/viewtopic.php?t=16729703 ... yep, dedicated, scalable, hardware-accelerated 'raytracing' back in 1990 o.0 [EDIT: Actually I think this extended texture memory rather than accelerating intersection tests... I'm confusing it with later raytracing accelerators 🙁.. My bad]). The acceleration structures to speed up ray/primitive intersection tests handled all the geometry; the shader did not have control over the geometry, which was already defined. Although it could of course decide to offset the intersection point, or perhaps fudge the normal how it saw fit, this wouldn't be persisted back to the model data for other shaders to be executed from (problematic when calculating bounces off normal-displaced topology, since the displacement is calculated on a per-shader-execution basis; subsequent intersection tests need this offset defined within the scene geometry to save recalculating it themselves), and it only allowed subdivision/evaluation of the surface rather than manipulation. At least that was how I remember it... however I have just looked up the shader functionality in current RenderMan, and it does indeed include deformation/displacement 'shaders', which I guess could be pretty much covered by a GL/D3D vertex shader (functionality-wise)... meh, I dunno.. I'm kinda talking myself out of my previous thought chain now o.0.

Certainly in raytracing land, the bottleneck (and thus the optimisations) is all about the vast numbers of ray-whatever intersection tests (pretty heavy, as repetitive computations go). The 'shadable' (generic kernel) stuff is exposed through 3 types of program (at least ime, these three types of shader have been settled on):
- Camera/projection kernel: spawns rays from pixels... sampling, jittering etc. is done here.
- Material shader kernel: BxDFs, occlusion tests (shadow casting), subsurface approximations etc.. and ultimately more rays can be spawned.
- Path kernel: used to deduce more rays to spawn/kill, or to calculate 'connections' (path tracing) between existing intersection points.

And in engines which handle multiple geometry types, the intersection/evaluation test itself could be considered a 'shader', perhaps needing to be user definable for extending custom primitive types.

Scali wrote:
spiroyster wrote:

Still 'Compute' shader ??? o.0 wtf ... no excuse, bad name. I've always preferred the word 'kernel'.

Depends on how you look at it, I suppose. By the time compute shaders came around, our GPUs had evolved from 'vertex shader units' and 'pixel shader units' to 'unified shader units'. So at the hardware level there was no difference anymore. There was just a pool of shader units that could be allocated to either vertex or pixel operations. In fact, by that time we also had the geometry shader already, and the domain and hull shader were also introduced for programmable tessellation.
In that context, 'compute shader' is not that strange, because you're basically using the same shader hardware, but you are simply bypassing the pipeline. And at least in some cases, you are actually performing shading-related calculations still.
But that's how I see it anyway, you are using the 'shader hardware' to perform generic 'computations' (as in, outside of the context of a geometry pipeline).
And 'compute shader' is a D3D-specific term. Within the context of that API it makes more sense than elsewhere, since the API for compute shaders is mostly the same as for any other type of shaders, and you use the same HLSL programming language and compiler for these shaders.
With OpenGL it's a different story. There's no compute shader in the API. You use the completely separate OpenCL API, and the only 'communication' you have between the APIs is some OpenGL extensions to share buffers.
I don't think people in the OpenCL world actually call them 'compute shaders' anyway.

I certainly don't call them compute shaders <spits on pavement>. Yes, in the past I have used CUDA, but more recently OpenCL. OpenCL is now part of Vulkan, unifying both these APIs, which traditionally could work on the same 'unified shader' hardware anyhow (like you say). I guess these are both stuck in my mind as 'GPGPU' kernels (rather than 'shaders'), even though they are technically executed in the same way/on the same architecture that 'shaders' are. This processing power of the GPU may have once been used for 'shading', however they are no longer just that, so 'shader' is not only now an irrelevant description, but also misleading imo.

Did you ever use nVidia Cg? I never did, but I think that was an attempt at a cross-platform shading language for HLSL/GLSL?

Scali wrote:

I think what you're missing is that the other GPU manufacturers also have a collection of similar patents, so nobody has a 'monopoly' on building GPUs. They have cross-license agreements.
In fact, Microsoft makes it an issue in D3D to only include functionality that all GPU vendors can support. You generally will not see functionality in D3D that is exclusive to a single vendor.
At best you'll see functionality that is currently only supported by a single vendor, but will be included by other vendors in future iterations of their hardware as well. ps1.4 is a fine example of that: The only 'real' ps1.4 hardware is the Radeon 8500 and derivatives. But all future GPUs have to support it.

Another example would be conservative rasterization: at first only NV supported it. Now Intel also supports it (actually a higher tier than NV does), and at some point in the future, AMD will as well.

Example from the other side would be tessellation: ATi had been playing around with a prioprietary tessellation implementation for years, but never found support, because other vendors were unwilling to implement it.
When DX11 arrived, hardware was at a point where you could finally implement significantly programmable tessellation with good performance. So at this point a proper spec was standardized. Ironically enough, AMD's implementation turned out to be sub-par when NV released its Fermi-architecture. NV's first architecture since the early high-order surface support in GF3 to have any form of tessellation.

Probably. The only reason I know about the nVidia patent is because an old lecturer used to harp on about it. iirc the first true 'bumping/faked-displacement' that could be accelerated was done via a GL_NV extension in OpenGL. At the time I did ask, and was told, that although they do own this 'technology' (not bump mapping in general, just accelerating it... or something), it was never enforced... I wonder how many SGI patents (and others) Microsoft infringed to arrive at its current incarnation of DX 😉. Poor old ATi/AMD, always had great ideas (Vulkan is basically a descendant of Mantle; idk how much this relates to DX12, but I suspect the underpinning concepts are similar... removing all this CPU-bound stuff, opening up to 'generic user-defined' pipelines?... all within the same API), but it seems they always need nVidia to come up with the hardware o.0.

Last edited by spiroyster on 2017-06-20, 14:20. Edited 1 time in total.

Reply 10 of 33, by appiah4

Rank: l33t++

ATi TRUFORM on R200 hardware was lightyears ahead of what the competition was doing.. To think tessellation actually became a thing only 15 years later is astounding.

truform.jpg

Retronautics: A digital gallery of my retro computers, hardware and projects.

Reply 11 of 33, by Scali

Rank: l33t
spiroyster wrote:

In REYES, lighting calc is done in the shader, the subdivision/micro-faceting is a by-product of the algorithm, which was optimised for speed given the hardware at the time (there was even a pixar card http://forums.nekochan.net/viewtopic.php?t=16729703... yep dedicated, scalable, hardware accelerated 'raytracing' back in 1990 o.0)

REYES is most definitely *NOT* raytracing though.
It's basically polygon rasterization. RenderMan promoted the use of things like shadowmapping and cubemaps for reflection/refraction effects.
There are a number of downsides to raytracing, which is why it never was very useful even for offline rendering.
Some downsides include:
- Performance is awful. Acceleration structures only get you so far, because they don't really work for animated geometry, especially not with NURBS-based animation.
- Controlling quality is difficult as well. Since each ray is essentially 'isolated', you can't perform any kind of texture filtering based on gradient deltas (which are basically just partial derivatives). This makes performing things like trilinear or anisotropic filtering very troublesome. Usually it's just handled with bruteforce: apply a lot of supersampling to compensate.

For some reason, raytracing is the algorithm that people associate with photo-realistic rendering and CGI in movies. But in reality most movies are done with RenderMan or similar technology, not raytracing.
The liquid metal robot in Terminator 2? RenderMan with environment-mapping. Not raytracing.

RenderMan did get raytracing as an optional 'effect', and the movie Cars was the first to feature this effect, but it was mainly used for close details of reflections, refractions and such (cars have a lot of chrome etc). For the most part they still used cubemaps.
There's a nice paper on that: http://graphics.pixar.com/library/RayTracingCars/paper.pdf

spiroyster wrote:

This processing power of the GPU may have once been used for 'shading', however they are no longer just that, so 'shader' is not only now an irrelevant description, but also misleading imo.

Using a *G*PU for anything non-graphics-related is misleading as well, so where do you draw the line? 😀

spiroyster wrote:

Did you ever use nVidia Cg? I never did, but I think that was an attempt at a corss-platform shading language for HLSL/GLSL?

What is GLSL? 😀
Microsoft and nVidia developed HLSL together. Cg is basically NV's attempt to add 'HLSL' to OpenGL as well. GLSL didn't exist yet at that time.
NV's compiler could basically compile 'HLSL' shaders to OpenGL ARB shader extensions (which were assembly-like). So it was generic for any SM2.0 hardware. For NV there were of course special extensions to the language so you could optimize for NV hardware and make full use of their shader extensions.

For some reason, instead of adopting and standardizing Cg for OpenGL (which could have saved us a LOT of trouble), they decided to (poorly) re-invent the wheel and come up with GLSL as the 'official' OpenGL shading language.
That put Cg in the same position as AMD's Mantle: nobody is going to touch it.

spiroyster wrote:

Poor old ATi/AMD, always had great ideas (Vulkan is basically a descendant of Mantle, idk how much this relates to DX12, but suspect the underpinning concepts are similar...removing all this CPU bound stuff, opening to 'generic-user defined' pipelines?...all within the same API)

Except it wasn't AMD's idea to begin with.
Consoles had low-level APIs for ages, and Microsoft and Sony developed their own APIs for AMD hardware long before we heard anything of Mantle.
AMD basically 'borrowed' the ideas they got from MS and Sony's API and rehashed it into their own 'DX12-lite'. Then started to market this vapourware like crazy, since MS hadn't officially announced DX12 yet, even though they were already working on it before Xbox One (I believe that Xbox One was meant to be launched with DX12, but it wasn't ready in time. So instead they launched it with DX11 + an extra low-level API layer to do pretty much the same thing. Which is also why MS specifically said that DX12 wasn't going to bring any gains for Xbox One. The main advantage was that the API was brought to the PC as well, allowing easier cross-platform development between Xbox and PC).
If it really was AMD's idea, then MS and Sony would simply have used Mantle, instead of developing their own APIs.
But if you look at DX12, it supports various features that Mantle does not, yet both NV and Intel have support for these features in hardware (and AMD does not). How is that possible if it was AMD's idea?
Heck, even the feature of 'async compute' can be traced back to CUDA's 'Hyper-Q', long before Mantle, Vulkan or DX12 were around: http://developer.download.nvidia.com/compute/ … /doc/HyperQ.pdf

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 12 of 33, by spiroyster

Rank: Oldbie
Scali wrote:

REYES is most definitely *NOT* raytracing though.
It's basically polygon rasterization. RenderMan promoted the use of things like shadowmapping and cubemaps for reflection/refraction effects.

At first, and for this reason it suffered when doing proper GI and true reflections (you could tell from the results o.0)... basically anything that required a 'bounce'. Rasterizers can't 'bounce'. I don't know what it is now, but even back then it incorporated a form of stochastic ray casting iirc (although you are probably right, that only came about with Cars, a lot later than I thought 🙁). But yes, fair enough, RenderMan traditionally is not a raytracer... it always has been, and may always be, this weird hybrid thing (not real-time).

Scali wrote:

There are a number of downsides to raytracing, which is why it never was very useful even for offline rendering.

o.0 ... Are we talking 'real-time'/60FPS here? Offline is the only useful...erm...use of it.

Scali wrote:

Some downsides include:
- Performance is awful. Acceleration sturctures only get you so far, because they don't really work for animated geometry, especially not with NURBS-based animation.
- Controlling quality is difficult as well. Since each ray is essentially 'isolated', you can't perform any kind of texture filtering based on gradient deltas (which are basically just partial derivatives). This makes performing things like trilinear or anisotropic filtering very troublesome. Usually it's just handled with bruteforce: apply a lot of supersampling to compensate.

waaaa.... why do you need anisotropic filtering though? isn't this kinda like an interpolation to compensate for a low sampled pixel? More samples, no problemo! Certainly high sample counts aren't as taboo as they appear to be with rasterisers... essentially they substitute quality in favour of speed...everything else they do is an attempt to regain that quality and photographic accuracy. Clutching at straws if you ask me. As performance increases, the desire to sacrifice this accuracy will diminish 😀.

Always the same arguments, usually centering around performance. Is there any other weapon you have in your arsenal against ray tracing? Blah, doubt it... mark my words... ray/path tracing/real photorealism will rule supreme, probably not for a while, probably not in my lifetime.... we will probably have the processing capabilities to do real-time fully fledged 1024-sample path tracing, and still use poncy rasterisers with oodles of shaders.... but one day we'll turn around and say... OMG, this is like so fake.... what's that? I can have the real deal at 60FPS rather than the 1000000FPS faked one? I want the real deal....coz I get all those realistic effects that *still* haven't been faked. All those things that a rasterizer ultimately can't do.

Did I mention FAKE!

Scali wrote:

For some reason, raytracing is the algorithm that people associate with photo-realistic rendering and CGI in movies. But in reality most movies are done with RenderMan or similar technology, not raytracing.

Erm.. that's because it is photo-realistic rendering! It models light propagation as photons, you know, light... that thing that is faked with a rasterizer because it decides... light is too hard to actually model, so I'll dumb it down and make it 'good enough'... then I'll add 'per-pixel' operations to do 'fakes' because I don't allow the user to manipulate anything outside built-in extensions....but I still can't bounce...dang! Here's a phrase that isn't in a rasterizer's vocab: 'Glooooballll Illlumina-shon'. And I don't want to hear about any sort of SSAO, because to be frank... it looks fake.

Scali wrote:

The liquid metal robot in Terminator 2? RenderMan with environment-mapping. Not raytracing.

T2 eh, looks like The Abyss to me o.0.. Back then, yes (turn of the 90's)... special fx were still very 'creative' with their technologies (leveraging what they could without resorting to the already well-known but very time-consuming/costly raytracing) and things were a lot different. However, come Jurassic Park (93 ish), ILM were rendering with ray/path tracers; for The Fifth Element, Digital Domain had their own (Nuke3D); Jim Henson's Creature Shop had their own in-house renderer approx 97/98 iirc (and there was this flying owl on the opening of Labyrinth back in the 80's, I'm pretty sure that wasn't rasterized... although you'll probably correct me on that one o.0); DreamWorks had their own (and now use Arnold); all of which perform ray casting (which is fundamentally what a ray/path tracer does), and now pretty much anything not cartoony/Pixar-lated, i.e. non-RenderMan, is done this way... and even now many of the frames are compositions of multiple layers (like AO, particles (from volumetric rendering), diffuse interreflection etc); a lot of these 'baked' layers are constructed from modelling light *as is*, not a rasterized approximation 🤣. It's not like everything to be rendered is put into the same scene, relying on the capabilities of the render engine to produce the final image.

Scali wrote:

RenderMan did get raytracing as an optional 'effect', and the movie Cars was the first to feature this effect, but it was mainly used for close details of reflections, refractions and such (cars have a lot of chrome etc). For the most part they still used cubemaps.
There's a nice paper on that: http://graphics.pixar.com/library/RayTr ... /paper.pdf

Yes.. apart from the obvious Buzz Lightyear helmet reflection/refraction requirement in Toy Story (environment map fail)... the content of their movies didn't require this effect... they could get away without it. tbh they could probably have gotten away without it in Cars too, if you ask me. But then again the whole talking car thing ruined the realism for me anyhow. o.0

Either way, its good! because now reflections are NOT fake.

Scali wrote:

spiroyster wrote:
This processing power of the GPU may have once been used for 'shading', however they are no longer just that, so 'shader' is not only now an irrelevant description, but also misleading imo.

Using a *G*PU for anything non-graphics-related is misleading as well, so where do you draw the line? 😀

ah, touché! dunno... the line would be drawn in a ray-traced/non-rasterized way anyhow, so it doesn't matter.

Scali wrote:

What is GLSL? 😀

Sounds like something both MS and nVidia wished they came up with ... har har
(yeah I didn't realise the *khronology*)

Scali wrote:

Microsoft and nVidia developed HLSL together. Cg is basically NV's attempt to add 'HLSL' to OpenGL as well. GLSL didn't exist yet at that time.
NV's compiler could basically compile 'HLSL' shaders to OpenGL ARB shader extensions (which were assembly-like). So it was generic for any SM2.0 hardware. For NV there were of course special extensions to the language so you could optimize for NV hardware and make full use of their shader extensions.

For some reason, instead of adopting and standardizing Cg for OpenGL (which could have saved us a LOT of trouble), they decided to (poorly) re-invent the wheel and come up with GLSL as the 'official' OpenGL shading language.
That put Cg in the same position as AMD's Mantle: nobody is going to touch it.

Yeah, because it wasn't an open standard... t'was an nVidia standard. Khronos, the cartel formerly known as ARB, likes open 'peer-reviewed' Che Guevara-wearing standards. Decided on by comrades for the greater good of the industry... not capitalist 'where the fuck do you think I want to go today/the way it is hopefully played' pigs!
Of course it's going to be rejected o.0

Scali wrote:

Except it wasn't AMD's idea to begin with.
Consoles had low-level APIs for ages, and Microsoft and Sony developed their own APIs for AMD hardware long before we heard anything of Mantle.

AMD basically 'borrowed' the ideas they got from MS and Sony's API and rehashed it into their own 'DX12-lite'. Then started to market this vapourware like crazy, since MS hadn't officially announced DX12 yet, even though they were already working on it before Xbox One (I believe that Xbox One was meant to be launched with DX12, but it wasn't ready in time. So instead they launched it with DX11 + an extra low-level API layer to do pretty much the same thing. Which is also why MS specifically said that DX12 wasn't going to bring any gains for Xbox One. The main advantage was that the API was brought to the PC as well, allowing easier cross-platform development between Xbox and PC).
If it really was AMD's idea, then MS and Sony would simply have used Mantle, instead of developing their own APIs.
But if you look at DX12, it supports various features that Mantle does not, yet both NV and Intel have support for these features in hardware (and AMD does not). How is that possible if it was AMD's idea?
Heck, even the feature of 'async compute' can be traced back to CUDA's 'Hyper-Q', long before Mantle, Vulkan or DX12 were around.

Yeah, I appreciate that.. I meant 'on the PC' platform; granted, these systems are tightly integrated with the provided hardware APIs. A single vendor providing the interfaces/access to multiple areas of the bespoke vendor's system... a system which does more than draw stuff and 'compute'; and the *G*PU has for a while been able to be used in this general-purpose fashion, just not by a single 'unified' API for compute and draw. Of course it was going to come along at some point, by someone (be it the DirectX 'dream team'), since all the architectures appeared to be pointing that way for a number of years prior (I think around 2006/7 I started hearing about GPGPU? Again, I may have been late to the party???).

And tbh, you have certainly proved that this hardware has been pushed in more 'generic' ways for a number of years? Code flipping since 2003? 2006? Why has it taken so long for someone to bring this unified API to our table? What are these Sony and MS console APIs of which you speak? Not the PS3 one (RSX == GeForce, and not nVidia-specific afaik) o.0? How do they differ? Why wasn't Mantle used? What improvement do they offer?

But yeah, in all honesty, I've never used HLSL/DX, so I can't compare. For all I know it's some mystical instruction set that grants prosperity and wealth to all those that use it. And yes, I love ray tracing and have been living in hope since 2001ish.

FAKE!

Reply 13 of 33, by Scali

Rank: l33t
spiroyster wrote:

At first

No, not just 'at first'. REYES is a rendering algorithm, raytracing is a different algorithm.
RenderMan is now REYES + optional raytracing effects. But REYES is still the same rendering algorithm it always was: a subdividing rasterizer.

spiroyster wrote:

o.0 ... Are we talking 'real-time'/60FPS here? Offline is the only useful...erm...use of it.

There have been various 'realtime raytracers'. In fact, these days 'ray marching' is very popular, just check out the Shadertoy website.

spiroyster wrote:

waaaa.... why do you need anisotropic filtering though? isn't this kinda like an interpolation to compensate for a low sampled pixel? More samples, no problemo! Certainly high sample counts aren't as taboo as they appear to be with rasterisers... essentially they substitute quality in favour of speed...everything else they do is an attempt to regain that quality and photographic accuracy. Clutching at straws if you ask me. As performance increases, the desire to sacrifice this accuracy will diminish 😀.

I think you need to look up what it is that anisotropic filtering tries to do 😀
The problem is basically when you have polygons that are near-perpendicular to the eye. This means that the texel-to-pixel ratio is extremely high in at least one direction. For example, if you are looking at a polygon of a wall, but the entire polygon is only one pixel wide on screen, then the entire width of the texture is 'visible' on screen, and when you sample the texture for a single pixel, you need to use a filter that spans the entire width of the texture to avoid aliasing.
So it's not just a case of 'throw more AA at it'. The amount of AA required to properly handle such cases is impractical.
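
To put that 'texel-to-pixel ratio' in formulas (just the textbook mip/anisotropy selection, not any particular piece of hardware's exact math): the rasterizer gets the screen-space derivatives of the texture coordinates for free, and those are exactly what the filter needs:

\[
\mathbf{d}_x = \left(\frac{\partial u}{\partial x},\ \frac{\partial v}{\partial x}\right),
\qquad
\mathbf{d}_y = \left(\frac{\partial u}{\partial y},\ \frac{\partial v}{\partial y}\right)
\]
\[
\lambda = \log_2 \max\bigl(\lVert \mathbf{d}_x \rVert,\ \lVert \mathbf{d}_y \rVert\bigr),
\qquad
\text{anisotropy} \approx
\frac{\max\bigl(\lVert \mathbf{d}_x \rVert,\ \lVert \mathbf{d}_y \rVert\bigr)}
     {\min\bigl(\lVert \mathbf{d}_x \rVert,\ \lVert \mathbf{d}_y \rVert\bigr)}
\]

An isolated ray only gives you the single intersection point, so those derivatives have to be reconstructed some other way (ray differentials, for example) before any of this machinery can be applied.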

spiroyster wrote:

Always the same arguments, usually centering around performance? Is there any other weapon you have in your arsenal against ray tracing?

What makes you think this is some kind of 'war' where I need 'weapons' and 'arsenals'?

It's not, I'm just telling history: RenderMan was the first highly successful offline renderer, and it was not a raytracer. Reasons mentioned in the paper I linked, among others.
Don't attack me, attack Pixar and all other movie companies that chose RenderMan over raytracers.

spiroyster wrote:

I want the real deal....coz I get all those realistic effects that *still* haven't been faked. All those things that a rasterizer ultimately can't do.

Did I mention FAKE!

That's funny, since a raytracer is still pretty darn fake.
Especially classic Whitted raytracing, which is why people have been trying to develop all sorts of other hacks like Monte Carlo path tracing and photon mapping and such.
This is exactly the thing I mean: people seem to assume that raytracing is the holy grail of photo-realistic rendering. It's not.
It's just one of the oldest and most naive rendering methods known to man.
Many offline renderers are actually a combination of polygon rasterizers and raytracers. After all, there's absolutely no reason not to do the first bounce with a polygon rasterizer, if you are modeling with polygons anyway. A polygon rasterizer can produce perfect per-pixel coordinates and surface normals, which give you just as good a starting point for raytracing as a 'first bounce' from a raytracer does.
So software like 3DSMax will rasterize the scene first, then perform raytracing for additional effects, not too different from RenderMan.
In fact, various raytracers even use shadowmaps instead of firing shadow rays.

In the end it's all hacks-upon-hacks to try and get more realistic images. I believe in hybrid renderers, and not in choosing a single algorithm for everything (as they say, when all you have is a hammer, everything looks like a nail).

spiroyster wrote:

Erm.. thats because it is photo-realistic rendering! it models light propagation as a photon, you know light...

A photon tracer does, a classic (Whitted) raytracer does not. It traces light from the opposite direction, from the eye back to the lightsource, remember?
Which is fake, since that can't actually model global illumination effects and such. Light travels from the lightsource to the eye, and not all photons emitted by the lightsource hit the eye, so classic raytracing is not correct.
And photon tracers do not necessarily have to be combined with a raytracer to produce a final image. They could also be combined with rasterizers.

spiroyster wrote:

and the *G*PU has for a while been able to be used in this general-purpose fashion, just not by a single 'unified' API for compute and draw.

And what is D3D11 then?
D3D11 supports both compute and draw (compute shaders, remember? mentioned then a few posts ago, they were introduced in D3D11). Years before Mantle.

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 14 of 33, by The Serpent Rider

Rank: l33t++
appiah4 wrote:

ATi TRUFORM on R200 hardware was lightyears ahead of what the competition was doing

Quake 3 engine - Bézier curves.
Messiah/Sacrifice - custom tessellation engine with T&L support.

I must be some kind of standard: the anonymous gangbanger of the 21st century.

Reply 15 of 33, by leileilol

Rank: l33t++

Proprietary non-API software implementations are not the same thing. You might as well also state Unreal's Smoothmodels and the tessellation in the Win95B OpenGL screensavers.

Messiah's wasn't actually real real-time tessellation, but models with pre-baked tessellation with excessive LOD levels. It's a real mess that never aged well, especially with the textures being baked high-res renders with the most lazy uvmapping ever, in a desperate bullshotting way. There was also no animation interpolation. None of it was magic. If you wanted real LOD magic to play that year, there was Treadmarks' amazing blastable dynamic terrain.

long live PCem

Reply 16 of 33, by spiroyster

Rank: Oldbie
Scali wrote:

No, not just 'at first'. REYES is a rendering algorithm, raytracing is a different algorithm.
RenderMan is now REYES + optional raytracing effects. But REYES is still the same rendering algorithm it always was: a subdividing rasterizer.

I was referring to RenderMan.

Scali wrote:

There have been various 'realtime raytracers'. In fact, these days 'ray marching' is very popular, just check out the Shadertoy website.

Absolutely 😉 getting close. One of the most impressive I found was Brigade (and that was like 4 years ago now).

Scali wrote:

I think you need to look up what it is that anisotropic filtering tries to do 😀
The problem is basically when you have polygons that are near-perpendicular to the eye. This means that the texel-to-pixel ratio is extremely high in at least one direction. For example, if you are looking at a polygon of a wall, but the entire polygon is only one pixel wide on screen, then the entire width of the texture is 'visible' on screen, and when you sample the texture for a single pixel, you need to use a filter that spans the entire width of the texture to avoid aliasing.
So it's not just a case of 'throw more AA at it'. The amount of AA required to properly handle such cases is impractical.

Yes, it would appear I do. Sampling the mip maps to produce a more perspective-correct texture? I don't see why this still couldn't be done in a ray/path tracer? The material would still sample a texture the same way, and jittering would still AA nicely?

Scali wrote:

What makes you think this is some kind of 'war' where I need 'weapons' and 'arsenals'?

It's not, I'm just telling history: RenderMan was the first highly successful offline renderer, and it was not a raytracer. Reasons mentioned in the paper I linked, among others.
Don't attack me, attack Pixar and all other movie companies that chose RenderMan over raytracers.

No, I'm not attacking you Scali, just aggressively fishing 😉. My sarcasm may perhaps be too sarcastic and my use of the word fake a bit overzealous; tbh I generally believe what you say over my own understanding anyway. The 'war' comes from your blasphemous explanation of a research subject close to my heart. And I don't think RenderMan is as widely used in the industry as you think. I can't back any of this up of course, but I just don't feel it's discussed much.

Scali wrote:

That's funny, since a raytracer is still pretty darn fake.
Especially classic Whitted raytracing, which is why people have been trying to develop all sorts of other hacks like Monte Carlo path tracing and photon mapping and such.
This is exactly the thing I mean: people seem to assume that raytracing is the holy grail of photo-realistic rendering. It's not.
It's just one of the oldest and most naive rendering methods known to man.

Yes, can't disagree with this description, however path tracing I do think is the holy grail imo? I don't see how some other methods can accurately model the illumination in all cases. How can a rasterizer accurately calculate true particle traversal without resorting to ray marching/volumetric methods? Monte Carlo path tracing and photon mapping are not hacks!?! (again, blasphemous o.0) Monte Carlo is an optimisation to converge on the result quicker: the result is produced by generating lots of light paths and probabilistically deciding which routes are worth it (higher contributing). Quite the opposite of a hack... the more paths the merrier, except not all paths are worth it, which is where Monte Carlo comes in, to selectively reduce the total number of paths required to end up at a (mathematically sound) result. Monte Carlo is usually sampled in a quasi random fashion which helps 'reduce cost'. Not a hack, fully fledged optimisation without loss of accuracy.
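
For reference, the estimator in question (just the textbook form of the rendering equation and its importance-sampled Monte Carlo estimate, nothing implementation-specific):

\[
L_o(x,\omega_o) = L_e(x,\omega_o) +
\int_{\Omega} f_r(x,\omega_i,\omega_o)\, L_i(x,\omega_i)\,(\omega_i \cdot n)\,\mathrm{d}\omega_i
\]
\[
\int_{\Omega} g(\omega_i)\,\mathrm{d}\omega_i \;\approx\;
\frac{1}{N} \sum_{k=1}^{N} \frac{g(\omega_k)}{p(\omega_k)},
\qquad \omega_k \sim p
\]

The estimator is unbiased for any sampling density p that is non-zero wherever g is non-zero; a good p only reduces variance, which is the 'optimisation without loss of accuracy' point.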

Scali wrote:

Many offline renderers are actually a combination of polygon rasterizers and raytracers. After all, there's absolutely no reason not to do the first bounce with a polygon rasterizer, if you are modelling with polygons anyway. A polygon rasterizer can produce perfect per-pixel coordinates and surface normals, which give you just as good a starting point for raytracing as a 'first bounce' from a raytracer does.
So software like 3DSMax will rasterize the scene first, then perform raytracing for additional effects, not too different from RenderMan.
In fact, various raytracers even use shadowmaps instead of firing shadow rays.

These maps are generated how though? What methods are used for baking here? There are many bounces required in most scenes; using the rasterizer to deduce depth from the eye is just one optimisation for the first bounce. It's not using rasterization to calculate the material/colour contribution (or if it is, it is limited to Lambertian diffuse), so I don't really think that counts o.0 It ends up being essentially an accelerated painter's algorithm to deduce occlusion and depth from the camera.

Scali wrote:

In the end it's all hacks-upon-hacks to try and get more realistic images. I believe in hybrid renderers, and not in choosing a single algorithm for everything (as they say, when all you have is a hammer, everything looks like a nail).

🤣... Yeah agree, I do think this is what we will end up with (and already have), more usage of ray casting/marching, when the hardware is enough. As you say using the best of each method, or in cases using the only method that can give the result required. Personally I can’t wait o.0

Scali wrote:

A photon tracer does, a classic (Whitted) raytracer does not. It traces light from the opposite direction, from the eye back to the lightsource, remember?
Which is fake, since that can't actually model global illumination effects and such. Light travels from the lightsource to the eye, and not all photons emitted by the lightsource hit the eye, so classic raytracing is not correct.

A bi-directional path tracer is both though. I should clarify: I'm talking about path tracing (of which ray tracing could be considered a subset). Path tracers modelling all light paths and bounces from luminaires do correctly calculate GI. Yes, a Whitted ray tracer itself is limited to eye paths, but combining the eye paths from the Whitted tracer with the light paths (connections) gives a very fast optimisation, generating many possible paths with minimal casts... while retaining accuracy and introducing true global illumination while it's at it.

Scali wrote:

And photon tracers do not necessarily have to be combined with a raytracer to produce a final image. They could also be combined with rasterizers.

Indeed, and for static geometry this gives probably the best and fastest result, but it requires the photon map generation in the first place (generated probably by traditional path-traced means, populating a spatial map with the photon interactions/light paths). The photon mapping aspect simply buffers results to be sampled when rendering the final image. No dynamic geometry, but real-time results (not including precalcs).

Scali wrote:

And what is D3D11 then?
D3D11 supports both compute and draw (compute shaders, remember? mentioned then a few posts ago, they were introduced in D3D11). Years before Mantle.

Well, I was hoping you would tell me 😀. Like I said, I have no experience with D3D; it's never been too far ahead of GL though (or vice versa). But yes, I didn't know 'compute shader' was a D3D11-specific term <spits on pavement>, I thought it was just 'another name'... So I presume by this response that a 'unified API' came about with D3D11.

Reply 17 of 33, by The Serpent Rider

Rank: l33t++

By the way, GeForce 3 had a technology similar to TruForm: RT-Patches. But its support was dropped even faster, due to lack of interest from game developers.

I must be some kind of standard: the anonymous gangbanger of the 21st century.

Reply 18 of 33, by Scali

Rank: l33t
spiroyster wrote:

I was refering to Renderman.

Yes, but responding to "REYES is most definitely *NOT* raytracing though."
So I was talking about REYES, not RenderMan.

spiroyster wrote:

Yes, it would appear I do. Sampling the mip maps to produce a more perspective correct texture? I don't see why this still couldn't be done in a ray/path tracer? Material would still sample a texture same way, and jittering would still AA nicely?

Thing is, it's trivial with a polygon rasterizer because you already have the gradients of the texture coordinates.
With raytracing, you only have an intersection point on your surface. From a point you cannot derive the warping of the texture. So you'd need to do something clever to figure out how many samples to take from the texture and in which direction(s).
Anisotropic filtering does not necessarily have to work with mip maps. But you do need to know the... anisotropy (how 'non-square' your texels are projected onto the screen basically).

spiroyster wrote:

And I don't think Renderman is as widespread used in the industry as you think. I can't back any of this up of course, but I just don't feel its discussed much.

Well, there's probably plenty of sites on that.
See here for example: https://renderman.pixar.com/view/movies-and-awards
That's quite a lot of big-name movies every year which use RenderMan.
RenderMan was also the first piece of software to ever receive an Oscar.

spiroyster wrote:

Yes can't disagree with this description, however path tracing I do think is the holy grail imo? I don't see how some other methods can accurate model the illumination in all cases. How can a rasterizer accurately calculate true particle traversal without resorting to ray marching/volumetric methods?.

Funny you should ask... Rasterizing is not necessarily a 2d operation. The last few generations of NV hardware support conservative rasterization, which also allows you to render triangles into 3D volumetric textures. One application of this is realtime global illumination:
http://www.geforce.com/whats-new/articles/max … ion-of-graphics
http://research.nvidia.com/publication/intera … el-cone-tracing
https://developer.nvidia.com/vxgi

Voxel cone tracing... not quite rasterizing, not quite raytracing, but something in-between.

spiroyster wrote:

Monte Carlo is usually sampled in a quasi random fashion which helps 'reduce cost'. Not a hack, fully fledged optimisation without loss of accuracy.

I think this contradiction is exactly my point.

spiroyster wrote:

These maps are generated how though? What methods are used for baking here?

Depends on what you're going for.
Since technically a shadowmap is also a 'first bounce', it makes sense to use a rasterizer.
However, you could also use a raytracer, and just use the shadowmap as a sort of 'cache', so you don't need to test shadow rays against all sorts of objects. All you have to do is sample one 'object': the shadowmap.

spiroyster wrote:

There are many bounces required in most scenes

The first bounce is usually the most important though. For diffuse surfaces, there are no additional bounces required at all. And for surfaces that are reflective or refractive in some way, the indirect light will usually just be small 'detail', and can be calculated at a lower level-of-detail to speed up rendering without any visible impact.

spiroyster wrote:

using the rasterizer to deduce depth from the eye is just one optimisation for the first bounce, it's not using rasterization to calculate material/colour contribution (or if it is, it is limited to Lambertian diffuse) so I don't really think that counts o.0

Say what?
Firstly, as I thought we had covered before, the actual 'shading' is not really part of rasterizing. A shading function will simply require certain input parameters (eye position, lightsources, surface normal, colourmaps etc), and will produce an output colour for that pixel.
In theory, the exact same shading function can be used in a raytracer, a rasterizer, or any other kind of renderer.
Secondly, why would it only be Lambertian diffuse? Given the above, you can just run the full shading equation for the first bounce directly from the rasterizer, just as if it was a first bounce from a ray.

Perhaps it's time you start seeing polygon rasterizing as simply an optimized first-bounce raytracer. Because in a way, that's what it is. It projects a polygon to screen-space in a way that is equivalent to raytracing. Then it exploits the fact that a polygon is a linear shape to interpolate all pixels inside the polygon.
Instead of sorting surfaces along the ray and picking the nearest one, you sort them with a z-buffer.

spiroyster wrote:

🤣... Yeah agree, I do think this is what we will end up with (and already have), more usage of ray casting/marching, when the hardware is enough. As you say using the best of each method, or in cases using the only method that can give the result required. Personally I can’t wait o.0

Well, get yourself a PowerVR GPU then. They are already doing raytracing: https://www.imgtec.com/powervr/ray-tracing/

spiroyster wrote:

Path tracers modelling all light paths and bounces from luminaires, do correctly calculate GI.

No, they approximate it.
Firstly they sample a lot less paths than there would be photons in the real world.
Secondly, the physical lighting models are simplified (and in fact, we don't even understand light in all detail yet, so we couldn't make a fully correct model even if we wanted to).

http://scalibq.wordpress.com/just-keeping-it- … ro-programming/

Reply 19 of 33, by agent_x007

Rank: Oldbie

@Scali/@spiroyster
1) How much does Photogrammetry help with rendering quality vs Raytracing (I mean, effects can look really nice to me) ?
2) (On Topic) I guess RV200 (Radeon 7500), has the same capabilities and limitations as R100 ?
3) Compute Shader was introduced in DirectX 10 (when NV went "CUDA" for the first time - G80 chip).
As for DirectX 11, Hull and Domain Shaders are the only new ones there (AFAIK).
