09: Application: Turn on the light

This will be a shorter section, as we will only briefly cover a small application that we can build with our current system.

You can find the full rasterization code here: Rasterizer 09

While we can add colors to our objects via textures, attributes and uniforms, the result looks a bit... flat. One of the things that keeps games and other renderings from looking like that is lighting: the interaction of light with surfaces.

This is a really big topic, so we will only implement the basic but time-tested lighting model by Bui Tuong Phong: the Phong reflection model. While it has since been superseded by models that are easier to configure or more physically accurate, it is still a good introduction and produces the desired lighting effects.

We will place a number $n_l$ of lights in our scene. The lights will be point lights, meaning they are infinitely small and shine in every direction with the same intensity. This is again a simplification; you can add more types of lights as you please.

Each light $i$ emits colored light $\mathbf{L}_i$, given as a 3D RGB vector, and is located at a position $\mathbf{p}_{l,i}$.

The surface is described by a material diffuse color $\mathbf{m}_d$ and a specular color $\mathbf{m}_s$. The diffuse color models how much light is reflected when light enters the object and is scattered back in all directions equally. This corresponds to the object color or the texture. The specular color models light reflecting off the surface itself. For non-metals, this color is generally white (meaning everything is reflected). Metals do produce reflection colors other than white, although the Phong model isn't very physically accurate. All components are in the range $[0,1]$. The diffuse and specular components should not sum to more than $1$, though as this model isn't physically accurate anyway, you can assign values relatively freely.
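
To make this concrete, a material could be passed to the shaders as a small set of uniform values. This is only a sketch; the field names are an assumption for this text and not prescribed by the rasterizer.

// Sketch: a possible material description for the Phong model.
// Field names are illustrative only.
const material = {
    diffuse: [0.8, 0.2, 0.2],   // m_d: reddish base color, components in [0, 1]
    specular: [1.0, 1.0, 1.0],  // m_s: white highlights, typical for non-metals
    shininess: 32.0,            // the exponent alpha used by the specular term below
};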

In the following we will assume that all vectors and points are in view space. It doesn't actually matter which coordinate system you use, but you have to be consistent, and view space has at least one property that makes the formulas simpler.

We evaluate the Phong model at a point $\mathbf{p}$ on a surface. The surface at that point is described by its (interpolated) normal $\hat{\mathbf{n}}$. The $\hat{\cdot}$ denotes a normalized vector. The normal is given as an attribute, so it has to be processed in the vertex shader and declared as an output. A bit further below, we also discuss what happens to the normal during transformations. To declare normal attributes, we extend the Attribute define. The utility functions that generate geometry will fill this attribute on their own.

const Attribute = {
    VERTEX: 0,
    UV: 1,
    NORMAL: 2,
};

The light vector $\hat{\mathbf{l}}$ is the vector from the surface point to the light:

\hat{\mathbf{l}} = \frac{\mathbf{p}_{l,i} - \mathbf{p}}{||\mathbf{p}_{l,i} - \mathbf{p}||}

The perfect reflection of the light $\hat{\mathbf{r}}$ is defined by the vector reflection formula:

\hat{\mathbf{r}} = -\hat{\mathbf{l}} - 2 (\hat{\mathbf{n}} \cdot (-\hat{\mathbf{l}})) \hat{\mathbf{n}}

This function is already defined for you as reflect(i, n), where the first input is the incidence vector, in this case $-\hat{\mathbf{l}}$.
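
For reference, the reflection formula itself is short. Below is a minimal sketch on plain 3-component arrays; the reflect provided by the rasterizer may be implemented differently, but computes the same thing.

// Sketch: reflect an incidence vector i at the unit normal n.
// Equivalent to i - 2 * (n . i) * n.
function reflectSketch(i, n) {
    const d = i[0] * n[0] + i[1] * n[1] + i[2] * n[2]; // n . i
    return [
        i[0] - 2.0 * d * n[0],
        i[1] - 2.0 * d * n[1],
        i[2] - 2.0 * d * n[2],
    ];
}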

Our own viewing direction is described by $\hat{\mathbf{v}}$ and points from the surface point to the camera position. Here we can see the reason for calculating all vectors in view space, since the camera position there is just the zero vector! So in view space the view vector is:

\hat{\mathbf{v}} = -\hat{\mathbf{p}}

While the colors in the lighting calculations are written here as vectors, they are not really vectors in the usual sense. You can add them, but also multiply them, which is done per component. This isn't really correct, but good enough. We will write this "color multiplication" with the symbol $\bigotimes$. In code you can get this componentwise multiplication with the cwiseMult(a, b) function, which takes vectors/matrices.
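
As a sketch, for two RGB colors stored as 3-component arrays this componentwise multiplication is simply:

// Sketch: componentwise ("color") multiplication of two RGB vectors.
function cwiseMultSketch(a, b) {
    return [a[0] * b[0], a[1] * b[1], a[2] * b[2]];
}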

The diffuse component $\mathbf{L}_{d,i}$ of the Phong model just returns the amount of light falling onto a surface patch, attenuated by the material color. This amount depends on the angle between the light direction and the normal (the surface orientation). You can imagine a small flashlight. If you hold the light perpendicular to the surface (same direction as the normal), all of its light will shine directly on the patch below. If you angle the light, the same brightness of the flashlight will be spread out over more area, thus decreasing the brightness at each fixed-area surface patch. Geometrically, this can be described by the cosine of the angle between the normal and the light direction. It describes the "projected area" of a surface patch in the direction of the light. We don't actually need to compute a cosine though, since our vectors are normalized and so the dot product will give us the desired value:

\mathbf{L}_{d,i} = (\hat{\mathbf{n}} \cdot \hat{\mathbf{l}}) (\mathbf{m}_d \bigotimes \mathbf{L}_i)

Attention: Light is not supposed to "shine through", so we need to clamp the dot product to be non-negative. You just need to take the maximum of it with $0$.
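
Putting the light vector, the clamped dot product and the color multiplication together, the diffuse term for a single light could be sketched as follows. The helper functions and names are placeholders for this text (reusing cwiseMultSketch from above), not the rasterizer's actual API.

// Sketch of the diffuse term for one light, on plain 3-component arrays.
const sub = (a, b) => [a[0] - b[0], a[1] - b[1], a[2] - b[2]];
const dot = (a, b) => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
const scale = (s, a) => [s * a[0], s * a[1], s * a[2]];
const normalize = (a) => scale(1.0 / Math.sqrt(dot(a, a)), a);

function diffuseTerm(p, n, lightPos, lightColor, materialDiffuse) {
    const l = normalize(sub(lightPos, p));   // direction from the surface point to the light
    const nDotL = Math.max(dot(n, l), 0.0);  // clamp: no light from behind the surface
    return scale(nDotL, cwiseMultSketch(materialDiffuse, lightColor));
}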

The specular component $\mathbf{L}_{s,i}$ measures how much we are looking into the ideal reflection direction $\hat{\mathbf{r}}$. For a perfect mirror, you only see the light reflected in that exact direction. The rougher a surface is, the more of that reflection we see even when not looking into it directly. This behaviour is controlled with a parameter $\alpha$, the "shininess". The higher the value, the more mirror-like the surface becomes. The falloff is modelled by a power of a cosine and the full formula looks like this:

\mathbf{L}_{s,i} = (\hat{\mathbf{r}} \cdot \hat{\mathbf{v}})^{\alpha} (\mathbf{m}_s \bigotimes \mathbf{L}_i)

Attention: Just like for the diffuse part, light is not supposed to "shine through", so we need to clamp the dot product to be non-negative. You just need to take the maximum of it with $0$. We generally also want to set this term to zero if $\hat{\mathbf{n}} \cdot \hat{\mathbf{l}}$ is negative, since in that case the light arrives from the backside.
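
The specular term looks similar. This sketch reuses the placeholder helpers from the diffuse and reflection sketches above; shininess is the exponent $\alpha$.

// Sketch of the specular term for one light, on plain 3-component arrays.
function specularTerm(p, n, lightPos, lightColor, materialSpecular, shininess) {
    const l = normalize(sub(lightPos, p));
    if (dot(n, l) <= 0.0) {
        return [0.0, 0.0, 0.0];                 // light arrives from the backside
    }
    const v = normalize(scale(-1.0, p));        // view vector in view space: towards the origin
    const r = reflectSketch(scale(-1.0, l), n); // ideal reflection of the light direction
    const rDotV = Math.max(dot(r, v), 0.0);     // clamp, just like the diffuse term
    return scale(Math.pow(rDotV, shininess), cwiseMultSketch(materialSpecular, lightColor));
}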

In the real world, a lot of lighting is indirect: it bounces off other surfaces until it hits the one we actually see. This is very expensive to calculate, so we don't do it. The Phong model instead uses a constant, directionless "ambient light" that roughly models the average light from the surroundings. We will call it $\mathbf{L}_{a}$ and weigh it by the diffuse material color.

If you want surfaces that emit light (although not onto other surfaces), you can add an additional per-object value $\mathbf{L}_{e}$, though it isn't necessary.

With all of that, the final color $\mathbf{L}$ that we can write into the color buffer is:

\begin{align*}
\mathbf{L} &= \mathbf{L}_{e} + \mathbf{m}_d \bigotimes \mathbf{L}_{a} + \sum_i^{n_l} \left( \mathbf{L}_{d,i} + \mathbf{L}_{s,i} \right) \\
&= \mathbf{L}_{e} + \mathbf{m}_d \bigotimes \mathbf{L}_{a} + \sum_i^{n_l} \left( (\hat{\mathbf{n}} \cdot \hat{\mathbf{l}}) \mathbf{m}_d + (\hat{\mathbf{r}} \cdot \hat{\mathbf{v}})^{\alpha} \mathbf{m}_s \right) \bigotimes \mathbf{L}_i
\end{align*}
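
To make the sum concrete, here is a sketch of how a fragment shader could evaluate it, building on the diffuseTerm and specularTerm sketches from above. The light and material field names are again assumptions for illustration, not the rasterizer's actual interface.

// Sketch: evaluate the Phong model at a view-space point p with normal n.
const add = (a, b) => [a[0] + b[0], a[1] + b[1], a[2] + b[2]];

function phongColor(p, n, lights, material, ambient, emissive) {
    // start with the emissive and ambient contributions
    let color = add(emissive, cwiseMultSketch(material.diffuse, ambient));
    for (const light of lights) {
        color = add(color, diffuseTerm(p, n, light.position, light.color, material.diffuse));
        color = add(color, specularTerm(p, n, light.position, light.color,
            material.specular, material.shininess));
    }
    return color;
}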

We can now implement the Phong lighting model! Most of the work is done in the shaders: the vertex shader forwards the needed information and the fragment shader evaluates the model.

There is one important thing about the normals though, since we are transforming them from local model space to view space. The normal is characterized by a dot product relation with surface directions ($\mathbf{u} \cdot \mathbf{n} = 0$ for tangent directions $\mathbf{u}$), and this relation should be preserved. Here is a short derivation of what the transformed normal $\mathbf{n}'$ needs to be if we transform the direction $\mathbf{u}$ by some transform $\mathbf{M}$, such that $\mathbf{u} \cdot \mathbf{n} = (\mathbf{M}\mathbf{u}) \cdot \mathbf{n}'$.

We use the relation between matrix multiplication and the dot product: $\mathbf{a} \cdot \mathbf{b} = \mathbf{a}^T \mathbf{b}$.

\begin{align*}
\mathbf{u} \cdot \mathbf{n} &= \mathbf{u}^T \mathbf{n} \\
&= \mathbf{u}^T \mathbf{M}^T (\mathbf{M}^T)^{-1} \mathbf{n} \\
&= (\mathbf{u}^T \mathbf{M}^T) ((\mathbf{M}^T)^{-1} \mathbf{n}) \\
&= (\mathbf{M} \mathbf{u}) \cdot ((\mathbf{M}^T)^{-1} \mathbf{n}) \\
&= \mathbf{u}' \cdot \mathbf{n}'
\end{align*}

So the normal transforms with the transposed inverse of the transformation matrix: $(\mathbf{M}^T)^{-1} = (\mathbf{M}^{-1})^{T} = \mathbf{M}^{-T}$. Since vectors and normals are not affected by translation, we only need the upper $3 \times 3$ part of the transformation matrix for this.

Also, from this we see that for pure rotations, normals transform like directions, since the transpose of a rotation matrix is equal to its inverse.

We will precompute the transpose inverse of the model view matrix and store it as MV_ti in the uniforms.
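
If your matrix library does not expose this directly, the inverse transpose of the upper $3 \times 3$ block can be computed via its cofactor matrix. The following is only an illustrative sketch assuming row-major arrays of rows; in the actual code you would typically combine whatever inverse and transpose functions your matrix library provides.

// Sketch: normal matrix = inverse transpose of the upper 3x3 block of a 4x4 matrix.
// Uses the fact that the inverse transpose equals the cofactor matrix divided by the determinant.
function normalMatrixSketch(mv) {
    const a = mv[0][0], b = mv[0][1], c = mv[0][2];
    const d = mv[1][0], e = mv[1][1], f = mv[1][2];
    const g = mv[2][0], h = mv[2][1], i = mv[2][2];

    const det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g);
    const s = 1.0 / det;

    // cofactor matrix scaled by 1 / det
    return [
        [ (e * i - f * h) * s, -(d * i - f * g) * s,  (d * h - e * g) * s],
        [-(b * i - c * h) * s,  (a * i - c * g) * s, -(a * h - b * g) * s],
        [ (b * f - c * e) * s, -(a * f - c * d) * s,  (a * e - b * d) * s],
    ];
}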

Lights are defined as uniforms and their positions are transformed into view space for easy use.
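
A sketch of this setup step could look like the following; transformPoint, lightsToViewSpace and the light object layout are assumptions for illustration only.

// Sketch: transform point-light positions into view space before filling the uniforms.
function transformPoint(m, p) {
    // multiply the 4x4 row-major matrix m with the point (p, 1) and drop w
    return [
        m[0][0] * p[0] + m[0][1] * p[1] + m[0][2] * p[2] + m[0][3],
        m[1][0] * p[0] + m[1][1] * p[1] + m[1][2] * p[2] + m[1][3],
        m[2][0] * p[0] + m[2][1] * p[1] + m[2][2] * p[2] + m[2][3],
    ];
}

function lightsToViewSpace(lights, viewMatrix) {
    return lights.map((light) => ({
        color: light.color,
        position: transformPoint(viewMatrix, light.position), // now in view space
    }));
}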

The solution is below.

Exercise:

Solution:

See the solution

Now that we have lighting, we will come to the last part of this course: Blending.