The so-called "EMBM" (hereafter called DX6 bumpenv, since true EMBM is much more complicated, as I will explain) is not one of my favorite features...
If you consider the math behind it, it falls apart pretty quickly. Consider what would be required for true per-pixel environment-mapped bump mapping. You'd want to compute a reflection vector at each pixel, using the standard reflection equation and a normal map for the normals, and you'd want to look that up in some sort of environment map (spheremap, cubemap, whatever).
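For reference, the "true" computation being approximated can be sketched in a few lines. This is just an illustration, not anything DX6 hardware actually does; the vector convention and the spheremap formula are the standard OpenGL ones, and the function names are made up:

```python
import math

def reflect(n, e):
    # Standard reflection equation: r = 2(n.e)n - e, with unit normal n
    # and unit vector e from the surface point toward the eye.
    d = sum(ni * ei for ni, ei in zip(n, e))
    return tuple(2.0 * d * ni - ei for ni, ei in zip(n, e))

def spheremap_coords(r):
    # OpenGL-style spheremap lookup: map a reflection vector r to (s, t).
    rx, ry, rz = r
    m = 2.0 * math.sqrt(rx * rx + ry * ry + (rz + 1.0) ** 2)
    return (rx / m + 0.5, ry / m + 0.5)
```

A normal pointing straight at the viewer reflects straight back and lands in the center of the spheremap, (0.5, 0.5); true per-pixel EMBM would evaluate both of these functions at every pixel.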
First of all, DX6 bumpenv is limited to looking up your reflection value in a single 2D texture. That limits you to spheremapping almost right off the bat.
It allows you to use your first texture to perturb the texture coordinates in the second texture. It looks up a (du,dv) pair in the first texture, and it multiplies this pair by a 2x2 matrix. Note that the 2x2 matrix can only be specified per primitive (equivalent to outside of a Begin/End, and thus not per vertex).
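The per-pixel arithmetic the hardware performs is tiny; it amounts to something like this (a sketch of the fixed-function math, with made-up parameter names):

```python
def bumpenv(du, dv, m00, m01, m10, m11, s0, t0):
    # DX6 bumpenv: multiply the (du, dv) pair fetched from the bump
    # texture by the per-primitive 2x2 matrix, then add the interpolated
    # base texture coordinate (s0, t0) for the second stage's lookup.
    s = m00 * du + m01 * dv + s0
    t = m10 * du + m11 * dv + t0
    return (s, t)
```

With (du, dv) = (0, 0) this degenerates to (s0, t0), the unperturbed coordinate.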
If (du,dv) is zero, the surface is flat at that point and you get no offset. So, this means that you would want to compute your second texture coordinate (the one for the spheremap) at each vertex using standard spheremap texgen to get the right results. (It's worth mentioning at this point that DirectX doesn't support spheremap texgen, so this is slightly painful...)
Now, you still have to figure out how to set up that 2x2 matrix to get accurate environment mapping. Basically, you're doing a local linear approximation of a nonlinear function. The nonlinear function is:
perturbed spheremap coordinates = reflection_vector_to_spheremap_coords(reflection_vector(N, E))
reflection_vector is quadratic in N, and reflection_vector_to_spheremap_coords involves a square root and some other stuff.
If your surface is flat, N and the slopes of the surface, which are the values encoded in your du,dv texture, are closely related. In fact, if du,dv are scaled correctly, N = (-du, -dv, sqrt(1-du*du-dv*dv)). If you compose all these functions, and if you are willing to assume that E is constant, i.e. the viewer is at infinity, you can express the perturbed spheremap coordinates (s,t) as a function of (du,dv). Let's do a local linear approximation using some simple multivariable calculus:
( s )   ( ds/ddu  ds/ddv ) ( du )   ( s0 )
( t ) = ( dt/ddu  dt/ddv ) ( dv ) + ( t0 )
where (s0,t0) is simply the value of (s,t) when (du,dv) = (0,0).
Lo and behold, this is the bumpenv equation! Look up (du,dv), multiply by a 2x2 matrix, add in a base value for the texture coordinate, and use the result for your next texture coordinate lookup.
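As a sanity check, the Jacobian can be computed numerically for the simplest case, a flat surface facing an infinite viewer along +z. This is a throwaway sketch under exactly the assumptions listed below, nothing more:

```python
import math

def perturbed_coords(du, dv):
    # Normal reconstructed from correctly scaled slopes:
    # N = (-du, -dv, sqrt(1 - du^2 - dv^2))
    nx, ny = -du, -dv
    nz = math.sqrt(1.0 - du * du - dv * dv)
    ex, ey, ez = 0.0, 0.0, 1.0          # infinite viewer along +z
    d = nx * ex + ny * ey + nz * ez
    rx = 2.0 * d * nx - ex              # reflection: r = 2(n.e)n - e
    ry = 2.0 * d * ny - ey
    rz = 2.0 * d * nz - ez
    m = 2.0 * math.sqrt(rx * rx + ry * ry + (rz + 1.0) ** 2)
    return (rx / m + 0.5, ry / m + 0.5)  # OpenGL-style spheremap (s, t)

# Finite-difference Jacobian at (du, dv) = (0, 0):
h = 1e-4
s0, t0 = perturbed_coords(0.0, 0.0)
ds_ddu = (perturbed_coords(h, 0.0)[0] - s0) / h
dt_ddv = (perturbed_coords(0.0, h)[1] - t0) / h
# For this viewing setup the 2x2 matrix comes out as diag(-0.5, -0.5),
# with (s0, t0) = (0.5, 0.5) as the base coordinate.
```

So for a screen-aligned flat quad you'd load a matrix of roughly diag(-0.5, -0.5) (times whatever fixed-point scale the hardware expects) to reproduce the linearized spheremap lookup.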
But in the process of deriving this equation, we've made the following simplifications:
- The environment map is in a spheremap. (cubemaps are much nicer to use)
- The surface is flat. (if you've noticed, most of the demos that use bumpenv use flat surfaces; not all, but most)
- The reflection vector is locally linear. (an approximation and can fall apart easily)
- The spheremap coordinates are locally linear w.r.t. the reflection vector. (another approximation)
- The viewer is an infinite viewer. (another approximation; can sometimes work, sometimes causes trouble)
In the end, you've made enough approximations that, mathematically speaking, it is nothing other than a hack.
Now, here's its saving grace: our brains are REALLY BAD at figuring out whether reflections are accurate!
But the approximations are bad enough that they restrict the cases in which you can actually use the technique, and they do produce lower-quality results. Most bumpenv hardware also has some pretty nasty restrictions on the resolution of the environment map; I think the G400 implementation requires it to be 32x32 or smaller, or something along those lines.
Fortunately, there is hope in sight. First, although it's slow, Cass did write a demo that can approximate true EMBM using a SW hack that involves reading back the framebuffer:
http://www.nvidia.com/marketing/deve...Frame?OpenPage
Also, with 3D hardware advancing quickly, someone's bound to put the real per-pixel reflection vector calculation in hardware at some point. It wasn't feasible back when bumpenv was first developed (Bitboys' Pyramid3D part; popular web culture assigns credit for the invention to Matrox, but they only popularized it).