Drawing Polygons

Bob Pendleton

It's been a busy couple of months since I wrote my last column. I got a wild idea to write a column about drawing polygons. I figured it was going to be easy to write because I knew all about polygons. After all, polygon code I wrote is out in the world in real live products. Yeah, right.

Code hidden away in a compiled program has to work, but very few people will ever see it and critique it. Writing for publication makes you think very carefully about what you say. If you make a mistake in your code you might have to fix it or maybe explain it to a customer service rep, but if you make a mistake in print you get letters from people all over the world. I learned that lesson when I first started posting on ARPANET mailing lists. It didn't take long to find out that nearly everything I knew was wrong, or at least someone somewhere thought it was wrong. I decided that I had better refresh my knowledge about polygons before I dusted off some old code and wrote about it. I'm glad I did.

So, let's start with some basics and some definitions. A polygon is a bounded region of a plane. It is bounded by a series of line segments. A line segment is a piece of a line that runs from one point to another point. The end point of a line segment is called a vertex. Each vertex is shared by exactly two line segments. The simplest way to represent a polygon is as a list of its vertices. You just assume that there is a line segment connecting the first vertex on the list with the second, the second with the third, and so on until you reach the last vertex. The last vertex is connected to the first vertex to close the polygon boundary. Triangles are the simplest polygons. If you have fewer than three sides you don't have a polygon, you have an odd sort of a line.

There are three different kinds of polygons. I call them "convex," "concave," and "complex" polygons. I've run into several different names for all but the convex polygons.
Be warned, the terms I use may not be the ones you find in graphics textbooks and articles. Convex polygons have the property that a line drawn from any point on the boundary of the polygon to any other point on the boundary of the same polygon will never cross another edge of the polygon. This means that no matter where you stand in a room shaped like a convex polygon you can see into every corner of the room just by turning around; you never have to walk to another part of the room to look around a corner. Triangles are always convex. Squares, rectangles, and regular pentagons are examples of other kinds of convex polygons. A nasty special case of the convex polygons are the "degenerate" polygons. Degenerate polygons have all their vertices in a straight line. Degenerate polygons send naive rendering algorithms into infinite loops.

Concave polygons have sides that are "caved" in. The outline of the letter "W" is a concave polygon. Every room in my house is also a concave polygon. Concave polygons fail the edge crossing test that defines convex polygons. But the edges of a concave polygon never cross each other.

Complex polygons are a real mess. The edges of complex polygons cross each other. Want to get really nasty? Some people even allow complex polygons to have holes in them, holes that are defined by other, potentially complex, polygons. You want an example? You may have learned to draw a five pointed star as a series of five connected lines that cross each other. You draw the star without removing the pen from the paper. That star is a complex polygon.

Sadly, you can get an "accidental" complex polygon by turning a convex polygon so you see just its edge. You expect to get a line, a degenerate polygon, because all the vertices are supposed to be in one plane. But a little round off error will quickly push the vertices out of their common plane. Fixed point and floating point numbers are only approximations.
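By the way, the convex/concave distinction can be tested mechanically. The sketch below is my own illustration, not code from this column (the names pt and is_convex are made up): a simple polygon is convex exactly when every turn between consecutive edges goes the same way, which the sign of a cross product tells you. Fair warning: a complex polygon like the five pointed star turns the same way at every vertex, so this test only separates convex from concave when the edges don't cross.

```c
#include <stdbool.h>

typedef struct { double x, y; } pt;

/* Hypothetical helper, not from the article: classify a simple
   (non-self-intersecting) polygon as convex by checking that the
   z-component of the cross product at every corner has the same sign.
   Collinear (degenerate) corners give zero and are ignored. */
bool is_convex(const pt *pnts, int n)
{
    int pos = 0, neg = 0;
    for (int i = 0; i < n; i++) {
        pt a = pnts[i];
        pt b = pnts[(i + 1) % n];
        pt c = pnts[(i + 2) % n];
        double cross = (b.x - a.x) * (c.y - b.y)
                     - (b.y - a.y) * (c.x - b.x);
        if (cross > 0.0) pos++;
        else if (cross < 0.0) neg++;
    }
    return !(pos && neg);   /* all turns one way -> convex */
}
```

A square comes back convex; an arrowhead with one vertex pushed into the interior comes back concave.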
It's a good bet that the vertices of the polygons you're trying to render are not, and never were, in the same plane. Convex polygons are the easiest to draw, concave polygons are harder to draw, and complex polygons are a real pain.

There are at least three different names for the process of drawing into a frame buffer. I've seen it called "rendering", which means to make a copy of something. I like the term because when you draw a polygon you are making a visible copy of its representation in memory. Another term I've seen a lot is "scan conversion." This means drawing it as a set of scan lines. It also implies that you want to draw it in scan line order, from the top of the screen to the bottom of the screen, to match the way the electron beam scans out the picture on the tube. The term comes from the days before frame buffers when the picture had to be scan converted on the fly for each frame. The only other term I've ever seen used to describe drawing into a frame buffer is "rasterization." The root word for "rasterization" comes from a word meaning "to scrape," a "raster" is "the area on which the image is reproduced in a kinescope," and "kinescope" turns out to be an old trademark for a kind of picture tube invented in 1924 by Vladimir Zworykin while working for Westinghouse Electric Corporation. That makes a raster the picture part of a CRT and rasterization the process of drawing something on a CRT. I love words that mean exactly what they are supposed to mean. As much as I like the word "rasterization" I'm going to use the word "render" to mean the process of drawing a geometric primitive, such as a polygon or line, into a frame buffer. It's much easier to type than rasterization.

At first glance rendering polygons seems pretty simple. But there are a lot of gotchas and some very subtle questions that have to be answered. What qualities make a good polygon rendering algorithm? The ones I'm looking for are speed, no gaps between adjacent polygons, no pixel ever written more than once, and no surprises on degenerate input.
You draw a polygon into a frame buffer by finding all the pixels that are inside the boundary of the polygon and filling them in. Well, almost. There is a little problem with pixels on the edges of polygons. A pixel is not a point. A pixel is a rectangle, hopefully a square, that contains an infinite number of points. A vertex is a point. Just as I can stand in a lot of different places on a football field, a vertex can fall in a lot of different places on a pixel. An edge is a line that runs from one vertex to another. That means that an edge starts somewhere inside a pixel, ends inside another pixel, and splits all the pixels in between into two odd shaped chunks. Think of a piece of thread stretched out on a tile floor. The tiles are pixels, the thread is an edge, and the ends of the thread are vertices.

If pixels that fall on an edge are counted as part of the polygon then pixels on an edge between two polygons will be filled in by both polygons. Filling the pixel twice is wasteful. And, in an animation, it can cause a "twinkling" effect along the edge. If edge pixels are not part of the polygon then edges never get colored and you get gaps between polygons. Really ugly.

There are several solutions to the edge problem. You can split the edge pixels between the polygons by figuring out how much of the pixel is in each polygon, computing a weighted average of the colors of the polygons at each pixel, and coloring the edge pixels with this average color. This process is called "antialiasing" and produces very pretty pictures. It is also pretty slow if you don't have special purpose hardware for computing the edge colors. You can figure out which polygon owns the majority of the pixel and assign the pixel to that polygon. Or, you can just say that edge pixels on the left and top edges of a polygon belong to the polygon and pixels on the right and bottom edges don't.
This way each polygon gives up an average of half a pixel on one side and gains it back on the other side. The result is not as pretty as antialiasing, but it is simple. And, it can be implemented by just being careful with the conditions in the drawing loops. The "top and left in, bottom and right out" rule does cause some surprising results. If you try to draw a degenerate polygon, or even a very narrow one, this rule will result in nothing at all being drawn. Also, very narrow parts of polygons won't be drawn. The result of this rule is that when you turn a polygon on edge you don't get a line, you get nothing at all. This is the rule I've used in my example code. I've used it because it doesn't slow down rendering, it eliminates gaps between polygons, and it keeps you from writing a pixel more than once.

So, how do you decide which pixels are inside a polygon? This sounds like a pretty simple problem until you take a look at complex polygons. We humans can look at a convex or concave polygon and see the boundaries. Computers can't. When something is big enough people can't see the boundaries either. There are at least three well defined ways to decide what part of a plane is inside a polygon. I'm only going to talk about one of them, the parity test. The idea of the parity test is to trace a line from outside the polygon (textbooks usually say "from infinity") to the other side of the polygon (negative infinity) and count how many times the line crosses an edge of the polygon. If the line has crossed an odd number of edges it's inside the polygon and if it's crossed an even number of edges it's outside the polygon.

Imagine you're out walking and come to an odd shaped building. You walk through a door and you're inside. You've crossed one edge of the polygon that describes the outside of the building. You walk down a corridor and leave the building through another door. You've crossed another edge of the polygon and you're outside.
What if the building is shaped like an "E?" If your path crosses all the arms of the "E" you will wind up passing through four more doors before you are finally all the way through the building.

A frame buffer is a two dimensional array of pixels. Each row of pixels in the frame buffer tells the video hardware what color to draw along a scan line of the display screen. This means that we can draw a polygon by coloring the parts of the scan line that are inside the polygon, and the parity test is just perfect for telling when a scan line is inside a polygon. For any scan line, find all the places that it crosses a polygon edge, sort the crossings by their X coordinate, and fill in the part of the scan line between each pair of crossings.

This technique works as long as we record an even number of edge crossings for each scan line. A problem occurs when a vertex falls exactly on a scan line. If we count the vertex on the scan line as a scan line crossing then that crossing point will be generated twice, because each vertex is part of two edges. If it is counted twice then we will wind up with an odd number of crossings for that scan line. If we don't count it then the crossing count will be off by one the other way, we still wind up with an odd number of crossings, and a really ugly mess on the screen. The solution to the problem is to move any vertex that falls on a scan line off the scan line by just one bit. This has no visible effect on the polygon and it keeps us from having to watch for special cases when we are computing where scan lines and edges cross.

To compute where an edge crosses a scan line you need to know which scan lines it crosses, and to do that you need to know where the scan lines are. Scan lines run horizontally through the middle of a row of pixels. When we refer to a pixel at (X, Y) the coordinates actually designate the upper left hand corner of the pixel.
The pixel at (X, Y) has a scan line running through it at Y+0.5 and the center of the pixel is at (X+0.5, Y+0.5). If pixels are like tiles, the integer grid lines that pixel coordinates are on are like the cracks between tiles. To find the first scan line an edge crosses you first sort the two vertices by their Y coordinates. Then you pick the vertex with the lowest Y coordinate, subtract 0.5 from the Y coordinate to map scan lines from half integer to integer boundaries, truncate to get an integer value, and add 1 to get to the pixel containing the first scan line the edge crosses. Or, you can just round up the Y value, add 0.5 and truncate, to get the same result. If you round the Y coordinate of the vertex with the highest Y coordinate you get the last scan line the edge crosses. Code that takes care of the vertex on a scan line problem and maps a value to pixel coordinates looks like this:

    int adjust(fp14 val)
    {
        if (fpFract(val) == fpHalf) {
            val++;
        }
        return fp2int(val + fpHalf);
    }

Adjust() first tests to see if the value falls on a scan line by testing to see if the fractional part of the value is exactly equal to 0.5. If it is, then it adds one to the value. Notice that the value is a fixed point value and I'm adding an integer 1 to it. One is the fixed point "epsilon" value. It is the smallest difference, other than zero, that you can ever have between two fixed point numbers. So I'm really adding a tiny fraction (one bit) to the number. Not enough to notice. Finally it rounds the value and returns the resulting integer. The "fp" functions and data types are all defined in "fixed.h" and are from the fixed point arithmetic package I described in my last column. A slightly improved version of my fixed point package is included with the source code for this article.
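The parity rule isn't limited to scan conversion; the same edge crossing count answers "is this point inside?" directly. Here's a minimal sketch of that idea (my own illustration, using doubles instead of the fp14 package, with made-up names pt and inside): cast a ray from the test point toward positive X and toggle an in/out flag at every edge the ray crosses.

```c
#include <stdbool.h>

typedef struct { double x, y; } pt;

/* Parity (even-odd) point-in-polygon test.  Walk the edge list and,
   for every edge that straddles the horizontal line through (px, py),
   toggle the flag if the crossing lies to the right of the point.
   An odd number of crossings means the point is inside. */
bool inside(const pt *pnts, int n, double px, double py)
{
    bool in = false;
    for (int i = 0, j = n - 1; i < n; j = i++) {
        if ((pnts[i].y > py) != (pnts[j].y > py)) {
            double cx = pnts[j].x + (py - pnts[j].y)
                      * (pnts[i].x - pnts[j].x)
                      / (pnts[i].y - pnts[j].y);
            if (px < cx)
                in = !in;
        }
    }
    return in;
}
```

This works for concave and even complex polygons; for a complex polygon it follows the even-odd rule, so the middle of the five pointed star comes out hollow.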
The following routine starts with a pair of vertices that define an edge and produces a list of all the scan lines that cross the edge and the X coordinates of the crossings:

    typedef struct {
        fp14 x, y;
    } point;

    typedef struct {
        fp14 x;
    } edgeRec;

    void edge(edgeRec *table, point *pt1, point *pt2)
    {
        fp14 x, dx;
        int idy, iy1, iy2;

        if (pt2->y < pt1->y) {
            exchange(point *, pt1, pt2);
        }

        iy1 = adjust(pt1->y);
        iy2 = adjust(pt2->y);

        idy = iy2 - iy1;
        if (idy == 0) {
            return;
        }
        idy = max(2, idy - 1);

        x = pt1->x;
        dx = fpDiv((pt2->x - pt1->x), int2fp(idy));

        do {
            table[iy1].x = x;
            x += dx;
            iy1++;
        } while (iy1 < iy2);
    }

The first thing edge() does is order the points so that y1 is less than or equal to y2. That step makes the rest of the routine a lot simpler. It makes sure that you can count on y2 - y1 being greater than or equal to zero and it means you know that the first point is above the second point on the screen. After that, the Y coordinates of both vertices are adjusted so they are now the Y coordinates of the first pixel and the last pixel in which the edge crosses a scan line. The next step is to give up if the edge starts and ends in the same row of pixels. If idy is zero then we know that the edge was horizontal, very nearly horizontal, or spanned less than one pixel. In any of those cases the edge will only confuse the renderer by introducing extraneous end points, or it is too small to have any visual effect. So, we just ignore it.

The next step feels like a kluge to me. To implement the "top and left in, right and bottom out" rule that keeps you from writing pixels more than once, you just don't draw the last scan line an edge crosses or the last pixel of each span that you do fill in. But, you want to cover the full range of X coordinates for an edge. So, you decrement idy by one. And, since you divide by idy you have to make sure that idy is never zero (thou shalt not divide by zero.) Like I said, it feels like a kluge, but it works.
If you don't do it, two dimensional texture mapping looks really bad. Finally we compute the change in X coordinates that occurs over the range of scan lines and, starting with the first X coordinate, we use linear interpolation to compute the X coordinates of the scan line crossings for each scan line.

I've used fixed point arithmetic to compute the X coordinates. Some of the textbooks I've looked at use an integer algorithm based on Bresenham's line drawing algorithm to compute the X coordinates. The Bresenham approach works very well but it requires an add, a test, and either a taken jump or a fall through and another add, on each iteration of the loop. The fixed point algorithm only takes one add on each iteration. The fixed point approach is going to be faster no matter what kind of processor you are using. If you want to generate integer X values at this stage in the rendering algorithm you just need one shift to convert a fixed point number to an integer. I believe the Bresenham based technique has an advantage when you are building a hardware polygon renderer. At least it did in the days of MSI components. In the days of VLSI and ULSI custom chips I'm not sure that the fixed point technique doesn't always win. I know that several modern hardware polygon renderers use fixed point.

Once you have generated all the scan line crossings for all the edges of a polygon you need to group them into pairs on the same scan line and fill them in. These pairs of scan line crossings are called spans. The following code fills in a span:

    void span(char c, int y, edgeRec *pt1, edgeRec *pt2)
    {
        int idx, ix1, ix2;

        if (pt2->x < pt1->x) {
            exchange(edgeRec *, pt1, pt2);
        }

        ix1 = adjust(pt1->x);
        ix2 = adjust(pt2->x);

        idx = ix2 - ix1;
        if (idx == 0) {
            return;
        }

        do {
            fb[y][ix1] = c;
            ix1++;
        } while (ix1 < ix2);
    }

Span() looks very much like the edge() function because it does the same things that edge() does, it just does them in the X direction instead of the Y direction.
Instead of writing into a crossing table it writes into a variable named "fb", which is the frame buffer. You should wonder why I bother ordering pt1 and pt2 by their X coordinates if they've already been sorted by X and grouped into pairs. It's called defensive (paranoid) coding. I use this code in a convex polygon renderer and I don't want it to blow up if it happens to run into a concave or complex polygon.

Before I show you a simple polygon rendering routine I want to look at what changes you need to make to use edge() and span() to render 3D polygons and use a Z buffer for hidden surface removal.

    typedef struct {
        fp14 x, y, z;
    } point;

    typedef struct {
        fp14 x, z;
    } edgeRec;

    void span(char c, int y, edgeRec *pt1, edgeRec *pt2)
    {
        fp14 z, dz;
        int idx, ix1, ix2;

        if (pt2->x < pt1->x) {
            exchange(edgeRec *, pt1, pt2);
        }

        ix1 = adjust(pt1->x);
        ix2 = adjust(pt2->x);

        idx = ix2 - ix1;
        if (idx == 0) {
            return;
        }
        idx = max(2, idx - 1);

        z = pt1->z;
        dz = fpDiv((pt2->z - pt1->z), int2fp(idx));

        do {
            if (z > zb[y][ix1]) {
                fb[y][ix1] = c;
                zb[y][ix1] = z;
            }
            z += dz;
            ix1++;
        } while (ix1 < ix2);
    }

    void edge(edgeRec *table, point *pt1, point *pt2)
    {
        fp14 x, dx;
        fp14 z, dz;
        int idy, iy1, iy2;

        if (pt2->y < pt1->y) {
            exchange(point *, pt1, pt2);
        }

        iy1 = adjust(pt1->y);
        iy2 = adjust(pt2->y);

        idy = iy2 - iy1;
        if (idy == 0) {
            return;
        }
        idy = max(2, idy - 1);

        x = pt1->x;
        dx = fpDiv((pt2->x - pt1->x), int2fp(idy));

        z = pt1->z;
        dz = fpDiv((pt2->z - pt1->z), int2fp(idy));

        do {
            table[iy1].x = x;
            table[iy1].z = z;
            x += dx;
            z += dz;
            iy1++;
        } while (iy1 < iy2);
    }

There is very little difference between the 2D and the 3D versions of the routines. In edge() you treat Z values just the same way you treat X values, and in span() you treat Z values just the same way they are treated in the three dimensional version of edge(). There is a pattern here. Any value you want to interpolate for each span or each pixel can be treated just like the Z coordinate is treated in edge() and span().
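That "initialize once, then one add per step" shape is the classic forward difference (or DDA) loop, and it is the reason the fixed point version needs less work per iteration than a Bresenham style stepper. Stripped of the polygon machinery it looks like this (a stand-alone sketch of mine in 16.16 fixed point rather than the column's fp14 package):

```c
#include <stdint.h>

#define ONE (1 << 16)   /* 1.0 in 16.16 fixed point */

/* Interpolate from v0 to v1 in `steps` equal increments, writing
   `count` successive values into out[].  The divide happens once,
   outside the loop; the loop body is a single add, which is the
   whole trick. */
void dda(int32_t v0, int32_t v1, int steps, int32_t *out, int count)
{
    int32_t v = v0;
    int32_t dv = (v1 - v0) / steps;
    for (int i = 0; i < count; i++) {
        out[i] = v;
        v += dv;
    }
}
```

The same loop interpolates X, Z, texture coordinates, or color, which is exactly the pattern edge() and span() exploit.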
The real difference between X and Z is in how they are used. In both the 2D and 3D cases the X coordinates tell us what to fill in. But the Z coordinate lets us know if there is already something drawn into the frame buffer and, if so, whether it is in front of or behind what we are drawing right now. If you can afford the memory, a Z buffer is a very simple way to handle the hidden surface problem. It's accurate to the pixel level. If you are drawing lots of polygons it can be the fastest hidden surface removal technique there is.

What follows is the simplest polygon routine I've ever seen. It is simple because it is an edge table algorithm and because it only draws convex polygons correctly. Edge table algorithms build tables of all the scan line crossings between local "extremes" of a polygon. Any time you have a corner of a polygon that points up or down, like the points of a star, you have a local Y extreme of the polygon. There are also X extremes of a polygon, but they don't affect the way we draw polygons. The top and bottom edges of a sheet of paper held upright are also Y extremes of a polygon. Another way of looking at it is that the slope of the edge leading into an extreme is the opposite of the slope of the edge leaving the extreme. And an extreme is a vertex or a series of vertices connected by horizontal edges.

There are two kinds of extremes that we are interested in, top extremes and bottom extremes. Top extremes point up and bottom extremes point down. In any polygon there is always a path consisting of connected edges that leads from a top extreme to a bottom extreme and vice versa. More importantly, the slopes of all the edges in the path have the same sign and they don't have any scan lines in common. That means that a simple vector of edge records can store all the scan line crossings for all the edges in the path from a top extreme to a bottom extreme. It turns out that a convex polygon has exactly one top extreme and one bottom extreme.
If there were more than one of each, then it would not be convex; it would have to be concave or complex. Since there is exactly one top extreme and one bottom extreme there are two sequences of edges that we have to worry about: the sequence of edges leading from the top extreme to the bottom extreme and the one leading back on the other side of the polygon. That means we need to declare two edge tables:

    edgeRec table1[HEIGHT];
    edgeRec table2[HEIGHT];

    void poly(char c, int n, point *pnts)
    {
        fp14 maxy, miny;
        int iy1, iy2;
        int imaxy, iminy;
        int pnt1, pnt2;
        int i;

        if (n < 3) {
            return;
        }

        maxy = miny = pnts[0].y;
        imaxy = iminy = 0;
        for (i = 1; i < n; i++) {
            if (pnts[i].y > maxy) {
                maxy = pnts[i].y;
                imaxy = i;
            }
            if (pnts[i].y < miny) {
                miny = pnts[i].y;
                iminy = i;
            }
        }

        iy1 = adjust(miny);
        iy2 = adjust(maxy);
        if (iy1 == iy2) {
            return;
        }

        pnt1 = iminy;
        pnt2 = iminy + 1;
        if (pnt2 >= n) {
            pnt2 = 0;
        }
        do {
            edge(table1, &pnts[pnt1], &pnts[pnt2]);
            pnt1 = pnt2;
            pnt2 = pnt2 + 1;
            if (pnt2 >= n) {
                pnt2 = 0;
            }
        } while (pnt1 != imaxy);

        pnt1 = imaxy;
        pnt2 = imaxy + 1;
        if (pnt2 >= n) {
            pnt2 = 0;
        }
        do {
            edge(table2, &pnts[pnt1], &pnts[pnt2]);
            pnt1 = pnt2;
            pnt2 = pnt2 + 1;
            if (pnt2 >= n) {
                pnt2 = 0;
            }
        } while (pnt1 != iminy);

        do {
            span(c, iy1, &table1[iy1], &table2[iy1]);
            iy1++;
        } while (iy1 < iy2);
    }

This code is far from being production quality code. The subroutines need to be inlined and the loops need to be unrolled. But, even in its current form, the code is surprisingly fast for polygon rendering code. This algorithm can be extended (at a significant performance cost) to handle concave and complex polygons.

I've included much more code in my column than I usually do, but there is more available. On the PCVR BBS you will find 3 complete programs, poly2.c, poly3.c, and poly2t.c, that contain code for drawing 2D polygons, 3D Z buffered polygons, and 2D textured polygons. The textured polygon code does not adjust for perspective so it won't work well for 3D graphics unless you are very careful. Poly2.c, poly2t.c, and poly3.c do not draw polygons directly on your screen. They draw using characters for colors into "char" arrays and then print the result as text. I find it is much easier to see what is going on in a graphics algorithm when I do it this way. It's very hard to see that something is wrong when the pixels are tiny dots on the screen.

References:

"Fast Scan Conversion of Arbitrary Polygons", Bob Wallis, in "Graphics Gems", edited by Andrew Glassner, Academic Press, 1990.

"Accurate Polygon Scan Conversion Using Half-Open Intervals", Kurt Fleischer and David Salesin, in "Graphics Gems III", edited by David Kirk, Academic Press, 1992.

"Principles of Interactive Computer Graphics", William M. Newman and Robert F. Sproull, McGraw-Hill, 1979.

"Computer Graphics, Principles and Practice", second edition, James D. Foley, Andries van Dam, Steven K. Feiner, and John F. Hughes, Addison-Wesley Publishing Company, 1992.

"Applied Concepts in Microcomputer Graphics", Bruce A. Artwick, Prentice-Hall, 1984.

Copyright 1993, 1997 Robert C. Pendleton. All rights reserved.
