05: Shaders
We can now draw lines and triangles wherever we want. It does feel a bit static though: both types of objects have a fixed color, and working with the geometry is cumbersome. For example, if we wanted to copy a triangle multiple times to multiple places, we would have to manually recompute all the vertices and create new geometry.
We will now introduce two powerful operations into our rasterizer that allow us to implement a large variety of effects with relative ease (although we are still missing some functionality to make it really shine).
These operations are called shaders and we will implement two types: Vertex shaders and fragment shaders.
Usually, you would write them in a specialized language, such as GLSL or HLSL.
As adding the shader functionality won't require any interesting algorithms, we will just look at the code and then do a little applied example afterwards: parametrically positioning and coloring objects!
You can find the full rasterization code here: Rasterizer 05
Vertex shader
A vertex shader operates on vertices (the points making up lines and triangles), specifically one vertex at a time, all independently. This could for example allow us to easily parallelize the process. Even in JavaScript, Web Workers could be utilized. In the next step, we will allow vertex shaders to output custom attributes, which we will automatically interpolate and use as input for the next shader.
For now, the vertex shader has only one purpose: Determine the final vertex positions used in rasterization.
If we want to keep the same functionality as before, the vertex shader just passes along the current vertex position.
How do we implement this?
In our draw operation, we create an array with the same length as the input vertex array.
We then call the vertex shader stored in the pipeline object for each vertex and store the result in the corresponding position in the new array.
Afterwards, we do what we did before, just with the old array replaced by the new one.
One additional part will be passing data to our shaders. For now, this will just be a constant block of data: uniform variables.
This again does not require much engineering.
We will put a field uniform_data into our Pipeline class and pass that as an argument to the vertex shader.
The shaders will also be a field in the Pipeline class called program.
This field is just an object with two fields, for the vertex and fragment shaders.
class Pipeline {
constructor({
viewport = {
x: 0,
y: 0,
w: 0,
h: 0
},
framebuffer = Framebuffer.new(),
clip_planes = [],
uniform_data = {},
program = null,
} = {}) {
this.viewport = viewport;
this.clip_planes = clip_planes;
this.framebuffer = framebuffer;
this.uniform_data = uniform_data;
this.program = program;
}
}
As we start with the vertex shader, here is how we define one that doesn't do anything:
// the program object to be stored in the pipeline
const program = {
// the vertex shader is just a function taking
// attributes and uniforms
vertex_shader : (attributes, uniforms) => {
return attributes[Attribute.VERTEX];
}
};
...
// set the pipeline's current program
pipeline.program = program;
// contains data that is passed to all calls to the vertex_shader in a draw command
pipeline.uniform_data = ...
There will be an additional parameter for the vertex shader, so we can actually pass data along to the fragment shader, but this will require an interpolation mechanism that we will implement in the next section.
Here is the new draw member function with the vertex shader functionality added:
/**
* Draw the given geometry
* @param {Pipeline} pipeline The pipeline to use
* @param {Object} geom Geometry object
* specifying all information
*/
draw(pipeline, geom) {
// no vertex shader
if (!pipeline.program) {
return;
}
const program = pipeline.program;
// no vertices
// we could also take a parameter specifying
// the number of vertices to be
// drawn and not rely on vertex data
if (!geom.attributes[Attribute.VERTEX]) {
return;
}
const vertices = geom.attributes[Attribute.VERTEX];
const n = vertices.length;
// process vertices
const transformed_points = new Array(n);
// Buffer variable to prevent having to create a
// new map for each vertex, as
// they share the same attributes
let vertex_attributes = [];
for (let i = 0; i < n; i++) {
// copy attributes in buffer
for (const [key, values] of Object.entries(geom.attributes)) {
vertex_attributes[key] = values[i];
}
// call vertex shader
transformed_points[i] =
program.vertex_shader(
vertex_attributes, pipeline.uniform_data
);
}
// go through objects
if (geom.topology === Topology.LINES) {
// handles lines
// handle two vertices per step
for (let i = 0; i < n; i += 2) {
this.process_line(
pipeline,
transformed_points[i], transformed_points[i + 1]
);
}
} else if (geom.topology === Topology.TRIANGLES) {
// handle triangles
// handle three vertices per step
for (let i = 0; i < n; i += 3) {
this.process_triangle(
pipeline,
transformed_points[i], transformed_points[i + 1],
transformed_points[i + 2]
);
}
}
}
For convenience, we copy the attributes for each vertex into an object, so the shader can access each attribute by its key.
If you compare this with the previous draw function, essentially the only changes are the additional array and the call to the vertex shader with the appropriate data.
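To make the flow concrete, here is a small usage sketch. The vertex values are made up, and raster stands for whatever rasterizer instance provides the draw method above:

// a minimal draw call using the pass-through program from above
const geom = {
    topology: Topology.TRIANGLES,
    attributes: {
        // one triangle, specified in pixel coordinates
        [Attribute.VERTEX]: [
            vec4(20, 20, 0, 1),
            vec4(120, 20, 0, 1),
            vec4(70, 100, 0, 1)
        ]
    }
};
pipeline.program = program;
raster.draw(pipeline, geom);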
In the next step, we will define the fragment shader.
Fragment shader
The fragment shader is called for each fragment produced by the rasterization, so it runs after the vertex shader. "Fragment" in this context means "information that is produced by the rasterization and will make up a pixel"; it is the common nomenclature for this kind of data. In our case, one fragment will produce one pixel, but you can extend this, as real implementations do, to allow operations such as multisampling, where multiple fragments are produced to color one pixel. Sometimes people call it a pixel shader as well, since generating pixels is basically what it does. But "fragment shader" is the more official name and makes it clear that you don't just produce pixels directly.
The fragment shader will do two things:
- Write a color into the output buffers. So far we only have one, but you can configure that to output multiple values!
- Return true if the fragment should be rendered and false otherwise. This simple addition allows us to skip rasterization for individual pixels, which we could use to cut out parts of an object (a small sketch of this follows the program definition below).
We define this together with the vertex shader and start again by recreating the same functionality as before. The result will differ slightly though, as we previously hardcoded different colors for lines and triangles, while this shader will produce the same color for both (for now).
There is one additional parameter for the fragment shader that we will ignore for now. We could have ordered the parameters differently or put them into a more dynamic parameter object, but we chose this layout to make the definition of the fragment shader agree with the one starting next section and to keep it simple.
const program = {
vertex_shader : (attributes, uniforms) => {
return mult(uniforms.M, attributes[Attribute.VERTEX]);
},
/**
* @param {AbstractMat} frag_coord The coordinate of
* the fragment to be processed
* @param {Object} data Interpolated data for the
* fragment (next section)
* @param {Object} uniforms Uniform data
* @param {Object} output_colors An object where the
* output colors are stored in fields corresponding
* to the output images
*/
fragment_shader : (frag_coord, data, uniforms, output_colors) => {
// write out a fixed color into the first output image
output_colors[0] = vec4(1, 0, 0, 1);
// return true means: rasterize this fragment
return true;
}
};
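As referenced above, here is a sketch of a fragment shader that uses the boolean return value to cut a checkerboard pattern out of an object; it could replace the fragment_shader in the program above. The 8 pixel block size is an arbitrary choice for illustration:

// a fragment shader producing a checkerboard cutout
fragment_shader : (frag_coord, data, uniforms, output_colors) => {
    // compute which 8x8 pixel block the fragment lies in
    const bx = Math.floor(frag_coord.at(0) / 8);
    const by = Math.floor(frag_coord.at(1) / 8);
    // skip every other block
    if ((bx + by) % 2 === 0) {
        return false;
    }
    output_colors[0] = vec4(1, 0, 0, 1);
    return true;
}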
Including this is actually pretty simple. Currently, our line and triangle rasterizations contain a few lines to write the pixel color, differing only in the color used:
// the current pixel writing portion for lines and triangles
...
// the final fragment coordinate
const frag_coord = vec4(px.at(0), px.at(1), 0.0, 1.0);
// run fragment shader with data
// buffer for colors
const output_colors = {};
// we currently hardcode one output color to be put
// into the one output image
output_colors[0] = vec4(1, 0, 0, 1);
this.write_fragment(pipeline, frag_coord, output_colors);
...
We now add the fragment shader by replacing those last two lines:
...
// the final fragment coordinate
const frag_coord = vec4(px.at(0), px.at(1), 0.0, 1.0);
// run fragment shader with data
// buffer for colors
const output_colors = {};
// call the fragment shader instead of setting a
// constant color
// the second "data" parameter is an empty array for
// now and will be added in the next section
const do_write_fragment =
program.fragment_shader(
frag_coord, {}, pipeline.uniform_data,
output_colors
);
// only write the fragment, if the shader says so
if (do_write_fragment) {
this.write_fragment(
pipeline, frag_coord,
output_colors
);
}
...
With these simple changes, we can already do a lot, which we will show in the next step.
Writing our first shaders
Now that we have the shader mechanism ready, we will use it to make our drawing operations configurable.
We will also define a simple helper class that we can use going forward to simplify this process. This course won't get into the details, but in general you will specify the way an object is positioned in the world using three parameters:
- Position
- Scale (in all 3 axes)
- Orientation (Rotation)
The nice thing is that you can specify all of them using matrices!
The easiest to write is scale.
We define three scaling factors $s_x$, $s_y$, $s_z$ for the three axes; scaling a vector just scales each component: $\mathbf{v}' = (s_x v_x, s_y v_y, s_z v_z)^T$.
We can also write that as a matrix multiplication:

$$\mathbf{S} = \begin{pmatrix} s_x & 0 & 0 \\ 0 & s_y & 0 \\ 0 & 0 & s_z \end{pmatrix}, \qquad \mathbf{v}' = \mathbf{S}\mathbf{v}$$
Translating a point is done as a simple addition of the translation vector $\mathbf{t}$: $\mathbf{p}' = \mathbf{p} + \mathbf{t}$. Due to some mathematical reasons (linear maps preserve zero, but a translation moves the origin), we can't represent a 3D translation with a $3 \times 3$ matrix. Luckily, we already made some preparations during clipping to have a neat way to compute the clip plane distances and define the clip planes: adding an extra $1$ at the end of the vector (for points).
We can then represent a translation the following way:

$$\mathbf{T} = \begin{pmatrix} 1 & 0 & 0 & t_x \\ 0 & 1 & 0 & t_y \\ 0 & 0 & 1 & t_z \\ 0 & 0 & 0 & 1 \end{pmatrix}$$

You can verify this by computing the matrix product.
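Written out, the product recovers exactly the addition from before:

$$\mathbf{T} \begin{pmatrix} p_x \\ p_y \\ p_z \\ 1 \end{pmatrix} = \begin{pmatrix} p_x + t_x \\ p_y + t_y \\ p_z + t_z \\ 1 \end{pmatrix} = \begin{pmatrix} \mathbf{p} + \mathbf{t} \\ 1 \end{pmatrix}$$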
Rotations are a bit more complicated and there are different ways to define/parametrize them, for example Euler angles or axis-angle representations.
We will just use $\mathbf{R}$ for the rotation matrix.
Now the nice thing is that we can chain transformations together by just multiplying the matrices onto the left of the current matrix! This is very handy, as we can combine all transformations into one matrix and then apply that final transform by just multiplying by the combined matrix.
Usually, we will have the order $\mathbf{M} = \mathbf{T}\mathbf{R}\mathbf{S}$. This can get arbitrarily complex: $\mathbf{M} = \mathbf{T}_n\mathbf{R}_n\mathbf{S}_n \cdots \mathbf{T}_1\mathbf{R}_1\mathbf{S}_1$, where any of these matrices could be the identity.
One thing you might have noticed is that the dimensions do not match, since the translation matrix is $4 \times 4$, while the others are $3 \times 3$. This is very easy to fix though: Just put the 3D matrices into the upper left part of a 4D identity matrix.
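Written out for a general $3 \times 3$ matrix $\mathbf{A}$, this embedding looks as follows:

$$\begin{pmatrix} a_{11} & a_{12} & a_{13} & 0 \\ a_{21} & a_{22} & a_{23} & 0 \\ a_{31} & a_{32} & a_{33} & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$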
Luckily, in code we already have methods to compute these matrices:
/**
* Creates a 4x4 translation matrix for a given
* translation vector
*
* @param {AbstractMat} t - 3D translation vector
* @returns {Mat} A translation matrix
*/
jsm.translation(t)
/**
* Creates a 4x4 scaling matrix for a given scaling vector.
* This vector contains the scaling factors for each dimension
*
* @param {AbstractMat} s - 3D scaling vector
* @returns {Mat} The scaling matrix
*/
jsm.scaling(s);
/**
* Computes a 4x4 3D rotation matrix, which represents
* a rotation around an axis
*
* @param {AbstractMat} axis - The axis to rotate around
* @param {number} angle - The angle to rotate in rad
* @returns {Mat} The rotation matrix
*/
jsm.axisAngle4(axis, angle);
We now define a function that makes defining a full transformation a bit easier:
function transform({
pos = vec3(0.0, 0.0, 0.0),
scale = vec3(1.0, 1.0, 1.0),
rot = jsm.MatF32.id(4, 4)
}) {
return mult(
jsm.translation(pos),
mult(
rot,
jsm.scaling(scale)
));
}
This handles creating the matrices and multiplying them in the correct order for us. The rotation is asked for directly, so you can use different methods to compute it.
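As a small usage sketch (the concrete values are made up for illustration), this creates a transform that halves the size of an object, rotates it by 45 degrees around the $z$-axis and then moves it:

// scale, then rotate around z, then translate
const M = transform({
    pos: vec3(2.0, 1.0, 0.0),
    scale: vec3(0.5, 0.5, 0.5),
    rot: jsm.axisAngle4(vec3(0.0, 0.0, 1.0), Math.PI / 4)
});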
We now define a helper class that bundles geometry, a transformation and a material. The material just contains any data that we want to use for specifying the appearance of an object, and we append it to the uniform data when rendering. It basically specifies what the object is "made of".
class Renderable {
constructor(geometry, {
local_transform = jsm.MatF32.id(4, 4),
material = {}
} = {}) {
this.geometry = geometry;
this.material = material;
// this transforms a point from the
// local space into the world
this.local_to_world = local_transform;
// compute the inverse to transform back a point
this.world_to_local = jsm.inv(local_transform);
}
static new() {
return new Renderable(...arguments);
}
}
You can do everything without these helpers, of course, but they make writing the code a bit easier.
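To sketch how the pieces fit together (geom stands for some geometry object and raster for the rasterizer instance; both are placeholders), setting up and drawing an object could look like this:

// a red square, scaled down and moved
const square = Renderable.new(geom, {
    local_transform: transform({
        pos: vec3(1.0, 0.5, 0.0),
        scale: vec3(0.25, 0.25, 1.0)
    }),
    material: { color: vec4(1, 0, 0, 1) }
});
// expose the matrix and the material to the shaders
pipeline.uniform_data = {
    M: square.local_to_world,
    material: square.material
};
raster.draw(pipeline, square.geometry);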
Now we want to put this all into action!
Below you can find the scene setup, where geometries are specified with a material that contains a color. They are then rendered, with the transformation matrix as well as the per-object material placed in the uniform object that is passed to the shaders.
Write the shaders, such that they transform the objects based on the transformation matrix uniforms.M and write out the color uniforms.material.color!
We also use a simple helper function create_plane_geometry_xy that just creates a geometry object for a 2D rectangle in the $x$ and $y$ coordinates.
As usual you can see the solution below.
Exercise:
- Go to the vertex_shader in pipeline.js
  - Transform the vertex (already returned) by the model matrix uniforms.M
- Go to the fragment_shader in pipeline.js
  - Write out the color given in uniforms.material.color instead of the currently fixed one
Solution: