Welcome back to this third and final installment in our WebGL Essentials mini-series. In this lesson, we'll take a look at lighting and adding 2D objects to your scene. There's a lot of new information here, so let's dive straight in!
Light
Lighting can be the most technical and difficult aspect of a 3D application to understand. A firm grasp of lighting is absolutely essential.
How Does Light Work?
Before we get into the different kinds of light and coding techniques, it's important to know how light works in the real world. Every light source (e.g., a light bulb, the sun) emits particles called photons. These photons bounce around objects until they eventually enter our eyes, which convert them into a visual "picture". This is how we see. Light is also additive, meaning that an object with more color is brighter than an object with no color (black). Black is the complete absence of color, whereas white contains all colors. This is an important distinction when working with very bright or "oversaturating" lights.
Brightness is just one property of light, and each property spans a range. Reflection, for example, can have a variety of different levels: an object like a mirror can be completely reflective, while other objects have a matte surface. Transparency determines how objects bend light and cause refraction; one object can be completely transparent while others are opaque (or anywhere in between).
The list continues, but I think you can already see that light is not simple.
If you wanted even a small scene to simulate real light, it would run at something like 4 frames an hour, and that's on a high-powered computer. To get around this problem, programmers use tricks and techniques to simulate semi-realistic lighting at a reasonable frame rate. You have to come up with some form of compromise between realism and speed. Let's take a look at a few of these techniques.
Before I start elaborating on the different techniques, I would like to give you a small disclaimer. There is a lot of controversy about the exact names of the different lighting techniques, and different people will give you different explanations of what "Ray Casting" or "Light Mapping" is. So before I start getting the hate mail, I would like to say that I am going to use the names that I learned; some people might not agree with my exact titles. In any case, the important thing is to know what the different techniques are. So without further ado, let's get started.
Ray Tracing
Ray tracing is one of the more realistic lighting techniques, but it is also one of the more costly. Ray tracing emulates real light: it emits "photons" or "rays" from the light source and bounces them around the scene (in most implementations, the rays actually start at the "camera" and trace back toward the light). This technique is usually used in films or scenes that can be rendered ahead of time. This is not to say that you can't use ray tracing in a real-time application, but doing so forces you to tone down other things in the scene. For example, you might have to reduce the number of "bounces" the rays perform, or make sure there are no objects with reflective or refractive surfaces. Ray tracing can also be a viable option if your application has very few lights and objects.
If you have a real-time application, you may be able to precompute parts of your scene.
If the lights in your application don't move around, or move only within a small area at a time, you can precompute the lighting with a very advanced ray tracing algorithm and recalculate just a small area around the moving light source. For example, if you are making a game where the lights don't move, you can precompute the world with all the desired lights and effects. Then, you just add a shadow around your character as it moves. This produces a very high-quality look with a minimal amount of processing.
Ray Casting
Ray casting is very similar to ray tracing, but the "photons" don't bounce off objects or interact with different materials. In a typical application, you would basically start off with a dark scene, and then you would draw lines from the light source. Anything the light hits is lit; everything else stays dark. This technique is significantly faster than ray tracing while still giving you a realistic shadow effect. But the problem with ray casting is its restrictiveness; you don't have a lot of room to work with when trying to add effects like reflections. Usually, you have to come up with some kind of compromise between ray casting and ray tracing, balancing between speed and visual effects.
The major problem with both of these techniques is that WebGL does not give you access to any vertices except the currently active one.
This means you either have to perform everything on the CPU (as opposed to the graphics card), or you have to make a second shader that calculates all the lighting and stores the information in a fake texture. You would then need to decompress the texture data back into lighting information and map it to the vertices. So, basically, the current version of WebGL is not very well suited for this. I'm not saying it can't be done; I'm just saying WebGL won't help you.
Shadow Mapping
A much better alternative to ray casting in WebGL is called shadow mapping. It gives you the same effect as ray casting, but it uses a different approach. Shadow mapping will not solve all your problems, but WebGL is semi-optimized for it. You can think of it as kind of a hack, but shadow mapping is used in real PC and console applications.
So what is it, you ask?
You have to understand how WebGL renders its scenes in order to answer this question. WebGL pushes all the vertices into the vertex shader, which calculates the final coordinates for each vertex after the transformations are applied. Then to save time, WebGL discards the vertices that are hidden behind other objects and only draws the essential objects. If you remember how ray casting works, it just casts light rays onto the visible objects. So we set the "camera" of our scene to the light source's coordinates and point it in the direction we want the light to face. Then, WebGL automatically removes all the vertices that are not in view of the light. We can then save this data and use it when we render the scene to know which of the vertices are lit.
This technique sounds good on paper but it has a few downsides:
- WebGL doesn't allow you to access the depth buffer; you need to get creative in the fragment shader when trying to save this data (a common workaround is sketched after this list).
- Even if you save all the data, you still have to map it to the vertices before they go into the vertex array when you render your scene. This requires extra CPU time.
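For the first problem, the usual workaround is to render the scene from the light's point of view with a special fragment shader that encodes each fragment's depth into the color channels of an ordinary texture. Here is a minimal sketch of such a shader; this is a common pattern, not something our framework implements:

//Sketch only: pack a 0..1 depth value into the RGBA channels of a color texture,
//since WebGL does not give the fragment shader a readable depth buffer
highp vec4 PackDepth(highp float Depth) {
    highp vec4 Packed = fract(Depth * vec4(1.0, 255.0, 65025.0, 16581375.0));
    Packed -= Packed.yzww * vec4(1.0 / 255.0, 1.0 / 255.0, 1.0 / 255.0, 0.0);
    return Packed;
}

void main(void) {
    //gl_FragCoord.z is the fragment's depth, already in the 0..1 range
    gl_FragColor = PackDepth(gl_FragCoord.z);
}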
All these techniques require a fair amount of tinkering with WebGL. But I will show you a very basic technique for producing a diffuse light to give a little personality to your objects. I wouldn't call it realistic light, but it does give your objects definition. This technique uses the object's normals to calculate the angle of the light relative to the object's surface. It is quick, efficient, and doesn't require any hacking with WebGL. Let's get started.
Adding Light
Let's start by updating the shaders to incorporate lighting. We need to add a boolean that determines whether or not the object should be lit. Then, we need an attribute for the actual vertex normal, which we transform so that it aligns with the model. Finally, we need to make a variable to pass the final result to the fragment shader. This is the new vertex shader:
<script id="VertexShader" type="x-shader/x-vertex"> attribute highp vec3 VertexPosition; attribute highp vec2 TextureCoord; attribute highp vec3 NormalVertex; uniform highp mat4 TransformationMatrix; uniform highp mat4 PerspectiveMatrix; uniform highp mxat4 NormalTransformation; uniform bool UseLights; varying highp vec2 vTextureCoord; varying highp vec3 vLightLevel; void main(void) { gl_Position = PerspectiveMatrix * TransformationMatrix * vec4(VertexPosition, 1.0); vTextureCoord = TextureCoord; if (UseLights) { highp vec3 LightColor = vec3(0.15, 0.15, 0.15); highp vec3 LightDirection = vec3(0.5, 0.5, 4); highp vec4 Normal = NormalTransformation * vec4(VertexNormal, 1.0); highp float FinalDirection = max(dot(Normal.xyz, LightDirection), 0.0); vLightLevel = (FinalDirection * LightColor); } else { vLightLevel = vec3(1.0, 1.0, 1.0); } } </script>
If we do not use lights, then we just pass a full-brightness light level of (1.0, 1.0, 1.0) to the fragment shader, and the object's color stays the same. When lights are turned on, we calculate the angle between the light's direction and the object's surface by taking the dot product with the normal, and we multiply the result by the light's color as a sort of mask to overlay onto the object.
Picture of surface normals by Oleg Alexandrov.
This works because the normals are already perpendicular to the object's surface, and the dot function gives us a number based on the angle of the light to the normal (with unit vectors, it is the cosine of the angle between them). If the normal and the light direction are almost parallel, the dot function returns a positive number, meaning the light is facing the surface. When the normal and the light direction are perpendicular, the surface is parallel to the light, and the function returns zero. Anything beyond 90 degrees between the light and the normal results in a negative number, but we filter this out with the "max zero" function.
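To make this behavior concrete, here is a tiny sketch in plain JavaScript (not part of our framework) using unit vectors:

//The dot product of two unit vectors is the cosine of the angle between them
function Dot(A, B) {
    return A[0] * B[0] + A[1] * B[1] + A[2] * B[2];
}

Dot([0, 0, 1], [0, 0, 1]);  //  1 -- light pointing straight at the surface
Dot([0, 0, 1], [1, 0, 0]);  //  0 -- light parallel to the surface
Dot([0, 0, 1], [0, 0, -1]); // -1 -- light behind the surface; max() clamps this to 0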
Now let me show you the fragment shader:
<script id="FragmentShader" type="x-shader/x-fragment"> varying highp vec2 vTextureCoord; varying highp vec3 vLightLevel; uniform sampler2D uSampler; void main(void) { highp vec4 texelColor = texture2D(uSampler, vec2(vTextureCoord.s, vTextureCoord.t)); gl_FragColor = vec4(texelColor.rgb * vLightLevel, texelColor.a); } </script>
This shader is pretty much the same as the one from earlier in the series. The only difference is that we multiply the texture's color by the light level. This brightens or darkens different parts of the object, giving it some depth.
That's all for the shaders; now let's go to the WebGL.js file and modify our two classes.
Updating our Framework
Let's start with the GLObject class. We need to add a variable for the normals array. Here is what the top portion of your GLObject should now look like:
function GLObject(VertexArr, TriangleArr, TextureArr, ImageSrc, NormalsArr) {
    this.Pos = { X : 0, Y : 0, Z : 0 };
    this.Scale = { X : 1.0, Y : 1.0, Z : 1.0 };
    this.Rotation = { X : 0, Y : 0, Z : 0 };
    this.Vertices = VertexArr;

    //Array to hold the normals data
    this.Normals = NormalsArr;

    //The rest of GLObject continues here
This code is pretty straightforward. Now let's go back to the HTML file and add the normals array to our object.
In the Ready() function where we load our 3D model, we have to add the parameter for the normals array. An empty array means the model did not contain any normals data, and we will have to draw the object without light. In the event that the normals array contains data, we will just pass it on to the GLObject object.
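For illustration, the updated object creation inside Ready() might look something like the following. The callback and property names here are assumptions based on the earlier parts of this series, so adapt them to your own code:

//Hypothetical sketch -- your model-loading callback and property names may differ
function ModelLoaded(ModelData) {
    //The normals array is the new fifth argument; it may be empty
    Building = new GLObject(ModelData.Vertices, ModelData.Triangles, ModelData.TextureMap, "Texture.png", ModelData.Normals);
}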
We also need to update the WebGL class. We need to link variables to the shaders right after we load the shaders. Let's add the normals attribute; your code should now look like this:
//Link Vertex Position Attribute from Shader
this.VertexPosition = this.GL.getAttribLocation(this.ShaderProgram, "VertexPosition");
this.GL.enableVertexAttribArray(this.VertexPosition);

//Link Texture Coordinate Attribute from Shader
this.VertexTexture = this.GL.getAttribLocation(this.ShaderProgram, "TextureCoord");
this.GL.enableVertexAttribArray(this.VertexTexture);

//This is the new Normals array attribute
this.VertexNormal = this.GL.getAttribLocation(this.ShaderProgram, "VertexNormal");
this.GL.enableVertexAttribArray(this.VertexNormal);
Next, let's update the PrepareModel() function and add some code to buffer the normals data when it is available. Add the new code right before the Model.Ready statement at the bottom:
if (false !== Model.Normals) {
    Buffer = this.GL.createBuffer();

    this.GL.bindBuffer(this.GL.ARRAY_BUFFER, Buffer);
    this.GL.bufferData(this.GL.ARRAY_BUFFER, new Float32Array(Model.Normals), this.GL.STATIC_DRAW);
    Model.Normals = Buffer;
}

Model.Ready = true;
Last but not least, update the actual Draw function to incorporate all these changes. There are a couple of changes here, so bear with me. I'm going to go piece by piece through the entire function:
this.Draw = function(Model) {
    if (Model.Image.ReadyState == true && Model.Ready == false) {
        this.PrepareModel(Model);
    }

    if (Model.Ready) {
        this.GL.bindBuffer(this.GL.ARRAY_BUFFER, Model.Vertices);
        this.GL.vertexAttribPointer(this.VertexPosition, 3, this.GL.FLOAT, false, 0, 0);

        this.GL.bindBuffer(this.GL.ARRAY_BUFFER, Model.TextureMap);
        this.GL.vertexAttribPointer(this.VertexTexture, 2, this.GL.FLOAT, false, 0, 0);
Up to here is the same as before. Now comes the normals part:
        //Check for normals
        if (false !== Model.Normals) {
            //Connect the normals buffer to the shader
            this.GL.bindBuffer(this.GL.ARRAY_BUFFER, Model.Normals);
            this.GL.vertexAttribPointer(this.VertexNormal, 3, this.GL.FLOAT, false, 0, 0);

            //Tell the shader to use lighting
            var UseLights = this.GL.getUniformLocation(this.ShaderProgram, "UseLights");
            this.GL.uniform1i(UseLights, true);
        } else {
            //Even if our object has no normals data we still have to pass something,
            //so I pass in the vertices instead
            this.GL.bindBuffer(this.GL.ARRAY_BUFFER, Model.Vertices);
            this.GL.vertexAttribPointer(this.VertexNormal, 3, this.GL.FLOAT, false, 0, 0);

            //Tell the shader not to use lighting
            var UseLights = this.GL.getUniformLocation(this.ShaderProgram, "UseLights");
            this.GL.uniform1i(UseLights, false);
        }
We check to see if the model has normals data. If so, we connect the buffer and set the boolean. If not, the shader still needs some kind of data or it will give you an error. So instead, we pass the vertices buffer and set the UseLights boolean to false. You could get around this by using multiple shaders, but I thought this would be simpler for what we are trying to do.
        this.GL.bindBuffer(this.GL.ELEMENT_ARRAY_BUFFER, Model.Triangles);

        //Generate the perspective matrix
        var PerspectiveMatrix = MakePerspective(45, this.AspectRatio, 1, 1000.0);

        var TransformMatrix = Model.GetTransforms();
Again, this part of the function is still the same.
var NormalsMatrix = MatrixTranspose(InverseMatrix(TransformMatrix));
Here we calculate the normals transformation matrix. I will discuss the MatrixTranspose() and InverseMatrix() functions in a minute. To calculate the transformation matrix for the normals array, you have to transpose the inverse of the object's regular transformation matrix; this keeps the normals perpendicular to the object's surface even when the object is scaled non-uniformly.
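If you want to see why the inverse-transpose is the right matrix, here is a quick 2D sketch (not part of our framework) of what goes wrong with non-uniform scaling:

//A 45-degree surface has tangent (1, 1) and normal (1, -1); their dot product is 0
function Dot2(A, B) { return A[0] * B[0] + A[1] * B[1]; }

//Scale X by 2: the tangent becomes (2, 1)
var NewTangent = [2, 1];

//Scaling the normal by the same matrix gives (2, -1), which is no longer perpendicular
console.log(Dot2(NewTangent, [2, -1]));   // 3 -- wrong

//Scaling it by the inverse-transpose (0.5 on X) gives (0.5, -1), which still is
console.log(Dot2(NewTangent, [0.5, -1])); // 0 -- correct

With that aside, back to the Draw() function: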
        //Set slot 0 as the active texture
        this.GL.activeTexture(this.GL.TEXTURE0);

        //Load the texture into memory
        this.GL.bindTexture(this.GL.TEXTURE_2D, Model.Image);

        //Update the texture sampler in the fragment shader to use slot 0
        this.GL.uniform1i(this.GL.getUniformLocation(this.ShaderProgram, "uSampler"), 0);

        //Set the perspective and transformation matrices
        var pmatrix = this.GL.getUniformLocation(this.ShaderProgram, "PerspectiveMatrix");
        this.GL.uniformMatrix4fv(pmatrix, false, new Float32Array(PerspectiveMatrix));

        var tmatrix = this.GL.getUniformLocation(this.ShaderProgram, "TransformationMatrix");
        this.GL.uniformMatrix4fv(tmatrix, false, new Float32Array(TransformMatrix));

        var nmatrix = this.GL.getUniformLocation(this.ShaderProgram, "NormalTransformation");
        this.GL.uniformMatrix4fv(nmatrix, false, new Float32Array(NormalsMatrix));

        //Draw the triangles
        this.GL.drawElements(this.GL.TRIANGLES, Model.TriangleCount, this.GL.UNSIGNED_SHORT, 0);
    }
};
This is the rest of the Draw() function. It's almost the same as before, but there is added code that connects the normals matrix to the shaders. Now, let's go back to those two functions I used to get the normals transformation matrix.
The InverseMatrix() function accepts a matrix and returns its inverse. An inverse matrix is a matrix that, when multiplied by the original matrix, produces an identity matrix. Let's look at a basic algebra example to clarify this: the inverse of the number 4 is 1/4, because 1/4 x 4 = 1. The "one" equivalent in matrices is an identity matrix. Therefore, the InverseMatrix() function returns the matrix that, when multiplied by its argument, yields the identity matrix. Here is this function:
function InverseMatrix(A) {
    var s0 = A[0] * A[5] - A[4] * A[1];
    var s1 = A[0] * A[6] - A[4] * A[2];
    var s2 = A[0] * A[7] - A[4] * A[3];
    var s3 = A[1] * A[6] - A[5] * A[2];
    var s4 = A[1] * A[7] - A[5] * A[3];
    var s5 = A[2] * A[7] - A[6] * A[3];

    var c5 = A[10] * A[15] - A[14] * A[11];
    var c4 = A[9] * A[15] - A[13] * A[11];
    var c3 = A[9] * A[14] - A[13] * A[10];
    var c2 = A[8] * A[15] - A[12] * A[11];
    var c1 = A[8] * A[14] - A[12] * A[10];
    var c0 = A[8] * A[13] - A[12] * A[9];

    var invdet = 1.0 / (s0 * c5 - s1 * c4 + s2 * c3 + s3 * c2 - s4 * c1 + s5 * c0);

    var B = [];

    B[0]  = ( A[5] * c5 - A[6] * c4 + A[7] * c3) * invdet;
    B[1]  = (-A[1] * c5 + A[2] * c4 - A[3] * c3) * invdet;
    B[2]  = ( A[13] * s5 - A[14] * s4 + A[15] * s3) * invdet;
    B[3]  = (-A[9] * s5 + A[10] * s4 - A[11] * s3) * invdet;

    B[4]  = (-A[4] * c5 + A[6] * c2 - A[7] * c1) * invdet;
    B[5]  = ( A[0] * c5 - A[2] * c2 + A[3] * c1) * invdet;
    B[6]  = (-A[12] * s5 + A[14] * s2 - A[15] * s1) * invdet;
    B[7]  = ( A[8] * s5 - A[10] * s2 + A[11] * s1) * invdet;

    B[8]  = ( A[4] * c4 - A[5] * c2 + A[7] * c0) * invdet;
    B[9]  = (-A[0] * c4 + A[1] * c2 - A[3] * c0) * invdet;
    B[10] = ( A[12] * s4 - A[13] * s2 + A[15] * s0) * invdet;
    B[11] = (-A[8] * s4 + A[9] * s2 - A[11] * s0) * invdet;

    B[12] = (-A[4] * c3 + A[5] * c1 - A[6] * c0) * invdet;
    B[13] = ( A[0] * c3 - A[1] * c1 + A[2] * c0) * invdet;
    B[14] = (-A[12] * s3 + A[13] * s1 - A[14] * s0) * invdet;
    B[15] = ( A[8] * s3 - A[9] * s1 + A[10] * s0) * invdet;

    return B;
}
This function is pretty complicated, and to tell you the truth, I don't fully understand why the math works. But I have already explained the gist of it above: it builds the inverse out of 2x2 sub-determinants (the s and c variables) and divides by the full determinant, a standard cofactor approach. I did not come up with this function; it was written in ActionScript by Robin Hilliard.
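If you ever want to convince yourself that it works, a pure scale matrix makes an easy sanity check, since its inverse just flips each scale factor:

//diag(2, 3, 4, 1) should invert to diag(0.5, 0.333..., 0.25, 1)
var ScaleMatrix = [
    2, 0, 0, 0,
    0, 3, 0, 0,
    0, 0, 4, 0,
    0, 0, 0, 1
];

console.log(InverseMatrix(ScaleMatrix)); //Diagonal entries: 0.5, 0.333..., 0.25, 1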
The next function, MatrixTranspose(), is a lot simpler to understand. It returns the "transposed" version of its input matrix. In short, it flips the matrix across its main diagonal, turning rows into columns. Here's the code:
function MatrixTranspose(A) {
    return [
        A[0], A[4], A[8],  A[12],
        A[1], A[5], A[9],  A[13],
        A[2], A[6], A[10], A[14],
        A[3], A[7], A[11], A[15]
    ];
}
Instead of reading across the horizontal rows (i.e., A[0], A[1], A[2], ...), this function reads down the vertical columns (A[0], A[4], A[8], ...).
You're good to go after adding these two functions to your WebGL.js file, and any model that contains normals data should be shaded. You can play around with the light's direction and color in the vertex shader to get different effects.
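For example, swapping in something like the following values for the two light constants in the vertex shader (just one suggestion to experiment with) produces a brighter, warmer light:

//A brighter, warmer light shining from the upper right
highp vec3 LightColor = vec3(0.6, 0.5, 0.3);
highp vec3 LightDirection = vec3(2.0, 2.0, 1.0);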
There is one last topic that I wish to cover, and that is adding 2D content to our scene. Adding 2D components to a 3D scene can have many benefits: for example, displaying coordinate information, a mini-map, instructions for your app, and the list goes on. This process is not as straightforward as you might think, so let's check it out.
2D vs. 2.5D
You might be thinking, "Why not just use the canvas's built-in HTML5 2D API?" Well, the problem is that HTML will not let you use the WebGL API and the 2D API on the same canvas. Once you assign the canvas's context to WebGL, you cannot use it with the 2D API; HTML5 simply returns null when you try to get the 2D context. So how do you get around this? Well, I'll give you two options.
2.5D
2.5D, for those who are unaware, is when you put 2D objects (objects with no depth) in a 3D scene. Adding text to a scene is an example of 2.5D. You can take a picture of the text and apply it as a texture to a 3D plane, or you can get a 3D model of the text and render it on your screen.
The benefits of this approach are that you don't need two canvases, and drawing would be faster if you only used simple shapes in your application. But in order to do things like text, you either need to have pictures of everything you want to write, or a 3D model for each letter (a little over the top, in my opinion).
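If you do want to try the picture-based route, a minimal sketch looks something like this: draw your text onto a hidden 2D canvas, then upload that canvas as a WebGL texture for a flat plane in the scene. The function below is illustrative and not part of our framework:

//Render a string to an offscreen canvas and turn it into a WebGL texture
function MakeTextTexture(GL, Text) {
    var TextCanvas = document.createElement("canvas");
    TextCanvas.width = 256;
    TextCanvas.height = 64;

    var Ctx = TextCanvas.getContext("2d");
    Ctx.fillStyle = "#FFF";
    Ctx.font = "32px sans-serif";
    Ctx.fillText(Text, 10, 40);

    var Texture = GL.createTexture();
    GL.bindTexture(GL.TEXTURE_2D, Texture);
    GL.texImage2D(GL.TEXTURE_2D, 0, GL.RGBA, GL.RGBA, GL.UNSIGNED_BYTE, TextCanvas);
    GL.texParameteri(GL.TEXTURE_2D, GL.TEXTURE_MAG_FILTER, GL.LINEAR);
    GL.texParameteri(GL.TEXTURE_2D, GL.TEXTURE_MIN_FILTER, GL.LINEAR);
    return Texture;
}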
2D
The alternative is to create a second canvas and overlay it on top of the 3D canvas. I prefer this approach because it seems better equipped for drawing 2D content. I am not going to start making a new 2D framework, but let's just create a simple example where we display the coordinates of the model along with its current rotation. Let's add a second canvas to the HTML file right after the WebGL canvas. Here is the new canvas along with the current one:
<canvas id="GLCanvas" width="600" height="400" style="position:absolute; top:0px; left:0px;"> Your Browser Doesn't Support HTML5's Canvas. </canvas> <canvas id="2DCanvas" width="600" height="400" style="position:absolute; top:0px; left:0px;"> Your Browser Doesn't Support HTML5's Canvas. </canvas>
I also added some inline CSS to overlay the second canvas on top of the first. The next step is to create a variable for the 2D canvas and get its context. I am going to do this in the Ready() function. Your updated code should look something like this:
var GL;
var Building;
var Canvas2D;

function Ready() {
    //GL declaration and model loading function here

    Canvas2D = document.getElementById("2DCanvas").getContext("2d");
    Canvas2D.fillStyle = "#000";
}
At the top, you can see that I added a global variable for the 2D canvas. Then, I added two lines to the bottom of the Ready() function. The first new line gets the 2D context, and the second sets the color to black.
The last step is to draw the text inside the Update() function:
function Update() {
    Building.Rotation.Y += 0.3;

    //Clear the canvas from the previous draw
    Canvas2D.clearRect(0, 0, 600, 400);

    //Title text
    Canvas2D.font = "25px sans-serif";
    Canvas2D.fillText("Building", 20, 30);

    //Object's properties
    Canvas2D.font = "16px sans-serif";
    Canvas2D.fillText("X : " + Building.Pos.X, 20, 55);
    Canvas2D.fillText("Y : " + Building.Pos.Y, 20, 75);
    Canvas2D.fillText("Z : " + Building.Pos.Z, 20, 95);
    Canvas2D.fillText("Rotation : " + Math.floor(Building.Rotation.Y), 20, 115);

    //16384 | 256 is COLOR_BUFFER_BIT | DEPTH_BUFFER_BIT
    GL.GL.clear(16384 | 256);
    GL.Draw(Building);
}
We start by rotating the model on its Y axis, and then we clear the 2D canvas of any previous content. Next, we set the font size and draw some text for each axis. The fillText() method accepts three parameters: the text to draw, the x coordinate, and the y coordinate.
The simplicity speaks for itself. This may have been a bit of overkill to draw some simple text; you could have easily just written the text in a positioned <div/> or <p/> element. But if you are doing anything like drawing shapes, sprites, a health bar, etc., then this is probably your best option.
Final Thoughts
In the scope of the last three tutorials, we created a pretty nice, albeit basic, 3D engine. Despite its primitive nature, it does give you a solid base to work from. Moving forward, I suggest looking at other frameworks like three.js or GLGE to get an idea of what is possible. Additionally, WebGL runs in the browser, so you can easily view the source of any WebGL application to learn more.
I hope you've enjoyed this tutorial series, and like always, leave your comments and questions in the comment section below.