This is part of the Rendering Tutorial

There's plenty of stuff that can go wrong when you're trying to display faces and other 3D objects on a 2D display. I'll give an overview of common problems, suggest possible solutions, and explain what I did in the cases I solved in my rendering engine.

Rendering 3D faces, such as the walls and roof parts of our house, can be daunting the first time you do it, and can give confusing results if you're not careful. There's a bit of work involved in displaying faces correctly, and I'll start with an important technique you'll have to implement to make sure the faces of your objects don't look like crap.

I'm not going to discuss how you can draw faces or polygons on your screen, as it is out of scope for this tutorial. There's a huge amount of literature available on this subject. If you're writing a pixel-based renderer (in contrast to my engine, which works completely with vectors) you might want to do (Gouraud or Phong) shading or texturing of your faces, but I'm not going to discuss this either.

A thing that can go wrong is shown in the picture to the right: faces show up in the wrong order (this is not a bug in my rendering engine; I can easily turn it on and off from my main program). Some faces should be in front of others, but they show up covered by the face that is supposed to be in the back. Your computer is unaware of this and just displays them in the order you tell it to. So it's a good idea to sort your faces before displaying them, such that your rendering engine displays the faces in the back before the ones in front.

The question is how to determine which face is in front of another. The answer is simple (but not sound, as I will show later). You can estimate the depth of a face by looking at the Z values of the coordinates constituting it. A face is in front of another one if all of its coordinates' Z values are smaller than the other face's. We can make this easier by computing the average depth of all coordinates that make up a face and comparing those averages: sort all faces by decreasing average depth and display them in that order. The picture on the right shows the same view of the house as the one above, but with all of its faces in the right order.
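A minimal sketch of this average-depth sort (in Python; the face representation and all names here are made up for illustration, this is not my engine's code) could look like this:

```python
# A face is a list of (x, y, z) vertices; larger Z means farther from the camera.

def average_depth(face):
    """Mean Z value of a face's vertices."""
    return sum(z for (_x, _y, z) in face) / len(face)

def sort_back_to_front(faces):
    """Sort faces by decreasing average depth, so the farthest is drawn first."""
    return sorted(faces, key=average_depth, reverse=True)

# Two dummy faces: 'near' sits entirely at Z = 1, 'far' at Z = 5.
near = [(0, 0, 1), (1, 0, 1), (1, 1, 1)]
far = [(0, 0, 5), (1, 0, 5), (1, 1, 5)]

print(sort_back_to_front([near, far]) == [far, near])  # True: far face drawn first
```

Drawing the sorted list front-to-back instead of back-to-front is exactly the one-character mistake described below.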

There's one last thing I'd like to show on this matter. Some people don't know how to sort correctly. I have met students who couldn't turn a sorting algorithm that produces increasing order into one that produces decreasing order, so don't be surprised (it only takes two seconds to find the "<" or ">" and flip it). I told you to sort with *decreasing* depth and not the other way around! The picture on the right shows the result of sorting everything the wrong way: some kind of inside-out view of the house. If you see this in your rendering engine, don't come complaining to me because I'll just laugh.

I will show an alternative way of sorting faces that was helpful in the design of my engine. Since my renderer uses the FIG format as its output format, I was able to abuse a feature of the FIG format that lets me put faces in front of others where needed. The FIG file format has a depth layer system, in which you can assign every 2D object to a depth ranging from 0 to 999. So, in order to put faces in front of one another, I can use these layers as a substitute for sorting the faces myself. The FIG format doesn't specify what happens when you have more than one object in the same layer (it's undefined which one is drawn on top), so a single layer wouldn't help me sort them anyway.

Before I start rendering my 3D objects to 2D objects, I first query all the objects for the depth range they will need when displayed as 2D objects later. Upon rendering, I give each object a chunk of FIG's layers and let the objects themselves figure out which layers they belong in, so that the resulting FIG file puts the farther objects in the deeper layers. This allows me to easily construct good 2D images of my 3D worlds. I am limited to 1000 depth layers, but that fits my current needs.
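Mapping average face depths into a chunk of FIG's layers could be sketched as below (in FIG, depth 0 is drawn on top and depth 999 ends up at the back, so farther faces should get larger depth numbers). The function name and the linear mapping are my own illustration, not my engine's actual code:

```python
# Sketch: map each face's average depth linearly into an assigned
# chunk [min_layer, max_layer] of FIG's 0..999 depth layers.
# FIG draws larger depth numbers behind smaller ones.

def assign_fig_layers(faces, min_layer, max_layer):
    """Return one FIG layer per face; farther faces get deeper layers."""
    depths = [sum(z for (_x, _y, z) in f) / len(f) for f in faces]
    lo, hi = min(depths), max(depths)
    span = hi - lo or 1.0          # avoid division by zero for flat scenes
    layers = []
    for d in depths:
        t = (d - lo) / span        # 0.0 = nearest face, 1.0 = farthest face
        layers.append(min_layer + round(t * (max_layer - min_layer)))
    return layers

faces = [
    [(0, 0, 2.0), (1, 0, 2.0), (1, 1, 2.0)],   # nearer face
    [(0, 0, 8.0), (1, 0, 8.0), (1, 1, 8.0)],   # farther face
]
print(assign_fig_layers(faces, 500, 999))       # [500, 999]
```

Assigning a sub-range like 500-999 instead of the full 0-999 is what leaves room for the tricks with layer chunks described below.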

This layer system works like a coarse, per-object depth buffer; it is similar in spirit to what is commonly known as a Z buffer, although a true Z buffer stores a depth value per pixel rather than per object.

Since I assign a chunk of the layers upon rendering, I can do cool things with my rendering engine (although these will only be used in this tutorial). For example, I can easily switch the minimum and maximum layer they may use and show you what kind of image you might expect when you don't know what sorting is all about and do it upside down (as shown in the third image in this tutorial part). I can also assign only one layer to the whole world and show you what goes wrong when you don't sort (the first image in this part).

There are some other interesting things I can do with those chunks of layers, too. I can tell the renderer to put certain objects in a range of layers on top of some other objects, even though the objects on top should really be in the back. I have used this trick to create the world views in the pictures. First, I draw the house in the bottom half of the available layers. After that, I use the top layers to display the X, Y and Z axes and the wireframe of the house on top of it. Using wireframes and dotted and dashed line styles, the axes and the wireframe seem to be part of the whole rendered image, but it's fraud. You may accuse me of cheating, but I think it's simply an excellent feature of my renderer. Of course you can do this with your engine too, by deliberately sorting some objects wrong before displaying them.

I have already pointed out that the use of average depths is not sound, and may result in faulty images. I will now go into that.

In the picture on the right you can see what goes wrong in my rendering engine when I try to render two specifically crafted faces. In the world view, the blue face can only be seen as a line. It should, however, be clear from the world view that the blue face is supposed to sit closer to the camera, in front of the red face, when we display both faces. However, the red face is bigger than the blue face, and since it has some coordinates that are much closer to the camera than the blue face's coordinates, its average depth becomes smaller than the average depth of the blue face. This results in the red face being rendered in front of the blue face. This can be a difficult problem to solve; I currently have no solution for it in my rendering engine. There are a couple of ways to tackle it if you really need to get it out of the way.
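A small numeric example (with coordinates I made up for this illustration, not the exact faces from the picture) shows how the average can lie:

```python
def average_depth(face):
    """Mean Z value of a face's vertices."""
    return sum(z for (_x, _y, z) in face) / len(face)

# A small blue face sitting entirely at Z = 4, and a big slanted red face
# whose Z values run from 1 (close to the camera) to 6 (behind the blue face).
blue = [(2, 0, 4), (3, 0, 4), (3, 1, 4), (2, 1, 4)]
red = [(0, 0, 1), (4, 0, 1), (4, 1, 6), (0, 1, 6)]

print(average_depth(red), average_depth(blue))  # 3.5 4.0
# The sort concludes the red face is nearer on average and paints it last,
# on top of blue, even though where the two faces overlap the red face
# actually lies behind the blue one (Z = 6 versus Z = 4).
```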

One trick would be to run a "hidden line removal" or "hidden polygon removal" algorithm on your polygons, but this is not easy to implement. A hidden line removal algorithm looks for line segments of your polygons that are hidden behind faces that should come in front of them, and then cuts those line segments (together with the hidden chunks of the polygon) out of the polygons. As a result, only the visible part (or parts, if one polygon cuts through another) of a polygon remains in the rendering engine, and the faces show up correctly. It is pretty complicated to get a good hidden line remover working in a 3D engine. There are plenty of cases to work out to make sure all hidden line segments are removed, and extra tricks (like cutting polygons up into their constituent triangles) are needed to discover whether a line segment is hidden or not. Sticking to hidden line removal is easier than investigating hidden polygon segment removal, as cutting a polygon part out of a polygon is a lot harder than cutting a line segment out of a line. I will not discuss this technique further, as it is out of scope for this tutorial.

A last thing I need to point out is faces that intersect each other. They can create faulty renderings as can be seen in the image on the right. The blue and red face intersect each other along the X axis, but they do not show up intersected when rendered in the camera view.

Indeed, when we display the faces in our camera view, we ask the computer to render a red and a blue face. The computer has no idea they should intersect and show up in parts. Solving this problem is also a quest on its own. I'll give some pointers on how you could implement a solution; I have not implemented one in my rendering engine. (Note that the world view in the image on the right shows up correctly because I've been cheating: it shows two red faces instead of only one.)

In order to detect intersections between faces, it's a good idea to triangulate your polygons first. Triangulation is not only helpful for detecting intersections; it can also help when detecting hidden lines and hidden polygon segments in a hidden segment removal algorithm as described above.
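For convex polygons, the simplest triangulation is a fan from one vertex; concave polygons need something more involved, like ear clipping, which I won't sketch here. An illustrative snippet (not my engine's code):

```python
def fan_triangulate(polygon):
    """Split a convex polygon (a list of vertices) into triangles that all
    share the polygon's first vertex. Only valid for convex polygons."""
    v0 = polygon[0]
    return [(v0, polygon[i], polygon[i + 1]) for i in range(1, len(polygon) - 1)]

# A unit square in the Z = 0 plane becomes two triangles.
square = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
print(fan_triangulate(square))
# [((0, 0, 0), (1, 0, 0), (1, 1, 0)), ((0, 0, 0), (1, 1, 0), (0, 1, 0))]
```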

In the intersection algorithm, you will have to detect faces, or triangles, that intersect each other, which is quite doable with a bit of math. Given the intersections you have found, you can cut the triangles up into smaller triangles along the intersections of the faces. Keep doing this until no more intersections are found, and the problem is solved. Note that you can run this algorithm before rendering to the screen, as the intersections are independent of the rotation of your objects (intersections rotate along), so you only need to find them once even when displaying the objects from different camera views. This is in contrast with the hidden line removal algorithm, which requires you to recompute the hidden line segments whenever the objects are viewed from a different angle, as the polygons will then cover other segments.
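The cheapest first step of that "bit of math" is a plane test: if all three vertices of one triangle lie strictly on the same side of the other triangle's plane, the two triangles cannot intersect. A sketch (all helper names are my own; a complete test would also check the other plane and the actual line of intersection):

```python
# Cheap rejection test for triangle-triangle intersection.

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def plane_of(tri):
    """Normal vector n and offset d of the plane through triangle tri,
    so that a point p lies on the plane when dot(n, p) + d == 0."""
    n = cross(sub(tri[1], tri[0]), sub(tri[2], tri[0]))
    return n, -dot(n, tri[0])

def may_intersect(tri_a, tri_b):
    """False when tri_a lies strictly on one side of tri_b's plane."""
    n, d = plane_of(tri_b)
    signs = [dot(n, v) + d for v in tri_a]
    return not (all(s > 0 for s in signs) or all(s < 0 for s in signs))

a = [(0, 0, -1), (1, 0, 1), (0, 1, 1)]   # straddles the Z = 0 plane
b = [(0, 0, 0), (2, 0, 0), (0, 2, 0)]    # lies in the Z = 0 plane
print(may_intersect(a, b))               # True: a's vertices are on both sides
```

Triangles that survive this test are candidates for the actual cutting step; the rest can be skipped entirely.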
