
While working on Worldbuilder v0.2, I spent a fair amount of time implementing an algorithm for generating distance fields on the surface of a sphere. It was admittedly a struggle, with many false starts, but I finally stumbled upon a solution that works well, producing a distance field with a high degree of accuracy, and executing very quickly with the help of the GPU.

Most of the literature I could find on the topic was focused either on generating distance fields for flat 2D images, or for full 3D space. In both cases, space was always Euclidean, whereas distance on the surface of a sphere behaves quite differently. Additionally, most algorithms I ran across were focused on calculating distances from a collection of points, but for reasons I’ll discuss below, I needed to calculate accurate distances from polygon outlines. Some of the 3D algorithms were tantalizingly close to what I wanted, since they often started with a triangle mesh as their input, but the 3D aspect greatly increased the complexity of the algorithms relative to my needs, while still not addressing the non-Euclidean geometry I was after.

In the end, the algorithm that I finally implemented is honestly nothing very impressive, and I kind of figured it out by accident. But it works, and it works well, despite being something of an ugly hack. As it might help others needing to do something similar, allow me to share the details of this algorithm, along with the journey getting there. Though admittedly, the journey does get a bit verbose at times, so feel free to jump straight to the final algorithm. I tried to keep that section relatively self-contained. Just know that you’re skipping over loads of pretty pictures. :-)

Contents, Because This Is Long

  1. But First, Why Distance Fields?
  2. The Journey Toward Generating a Distance Field
    1. Diagonal Lines and Errors at Nearly Zero Distances
    2. The Medial Axis, Intense Mathematics, and So Close but Not Quite
    3. Distance Fields and Cube Maps
    4. Accidental Discovery of a Hacky but Effective Solution
  3. The Final Algorithm in Detail
    1. Apron Thickness
    2. Building the Apron Mesh (with pseudocode)
    3. Rendering to the Cube Map (with GLSL)
  4. Final Thoughts, Possible Improvements

But First, Why Distance Fields?

Map of India, with distance-based shading of the coastlines, obtained from the Wikimedia Commons.

A distance field, for those who aren’t already familiar with such a thing, is a useful rendering tool for any graphical feature that is influenced by distance to some point or shape. A popular use is with fonts, where they enable fast and high quality rendering of text, including advanced outline, glow, and shadow effects, among others. The distance in this case is the shortest distance to the nearest edge of a glyph.

Map of Australia, with distance-based shading of regional borders, obtained from the Wikimedia Commons.

At some point I realized that distance fields could be very useful for planet and map rendering also, especially when applying artistic styles rather than something aiming strictly for realism or raw data representation (such as viewing elevation or temperature). For example, many map styles will color the water based purely on distance to the coastline, as with this map of India. Sometimes it is a subtle effect, but small things like this can add up to an overall aesthetic quality, while leaving them out simply results in a bland map.

Map of the coastline and the islands off Zadar, obtained from the Walters Art Museum.

Political maps frequently use a similar effect to highlight borders between political units. The distance to the nearest political border can be used in a variety of ways. If a darker color is used for any pixel within a specific distance, you’ll end up with a solid line of whatever thickness you choose. If you instead fade to a darker color the closer you get to the border, then you’ll get a softer outline of the political region, such as in this map of Australia.

More artistic styles can be achieved by layering colors at incrementally further distances, creating a contoured effect. This ancient map of Zadar layers together a thin black line, with thicker bands of gold, red, and pink or other similar color combinations.

Though I do not yet have such map styles available in Worldbuilder, the plan is to eventually support them, and distance fields will be at the heart of that rendering.

The Journey Toward Generating a Distance Field

A portion of a distance field (top) and the outline from which it was generated (bottom).

A (two dimensional) distance field is generally a grayscale bitmap, with the intensity of each pixel representing the shortest distance at the pixel’s location to whatever shape the distance field represents. Black is a distance of zero, meaning that the pixel is on or in the shape, while white is the farthest representable distance away from the shape. The method of generating a distance field is dependent upon the input that you start with, though many algorithms expect monochrome or grayscale bitmaps as input also. To use these algorithms in cases where your input is not in this form, you’ll need to transform your data first; this may be incredibly simple in some cases, but complex in others. These algorithms also tend to assume that a black pixel represents a zero-dimensional point, rather than a one-dimensional line or two-dimensional area, which means that you’ll lose possibly important information when transforming from your initial format to a grayscale bitmap.

I began my journey a few months back planning on using one of these standard 2D algorithms, hoping I could adjust the math just enough to make it sufficiently accurate when stretched across the surface of a sphere. Initially, I was generating an individual distance field for each object of interest, which in the beginning only consisted of tectonic plates. So for each tectonic plate, I configured a camera matrix placed at the origin of the planet looking toward the tectonic plate’s center point, and I rendered the outline of the tectonic plate in black onto a white render buffer. I would then pull the render buffer back into main memory and run a distance field algorithm on it, giving me texture data that I could push back to the graphics card for later use while rendering.

The distance field algorithm I chose was from the 2012 paper Distance Transforms of Sampled Functions, by Pedro F. Felzenszwalb and Daniel P. Huttenlocher. This is an impressive algorithm; it has that elegance to it that computer scientists adore. And more practically, its computational complexity is linear with respect to the number of pixels being processed. It’s also quite easy to parallelize on the CPU, since each row of the input is processed independently of the others. I’m not skilled enough, though, to see how to effectively move the computations onto the GPU, since the algorithm has a highly conditional loop structure that I doubt GPUs would appreciate. But it was definitely good enough for my early experiments.
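
To give a flavor of it, here is the core one-dimensional pass, condensed into Lua from the paper’s pseudocode. Treat it as an illustration rather than shipping code; in particular, empty pixels should be seeded with a large finite value rather than math.huge, to keep the subtractions well behaved.

```lua
-- One-dimensional squared-distance transform of the sampled function f,
-- a 1-based array (0 at feature pixels, a large value elsewhere).
-- Computes the lower envelope of the parabolas y = f[i] + (x - i)^2.
local function distanceTransform1D(f)
  local n = #f
  local d = {}                          -- output distances
  local v = { 1 }                       -- indices of parabolas in the envelope
  local z = { -math.huge, math.huge }   -- boundaries between them
  local k = 1
  for q = 2, n do
    -- Intersection of parabola q with the rightmost envelope parabola.
    local s = ((f[q] + q * q) - (f[v[k]] + v[k] * v[k])) / (2 * q - 2 * v[k])
    while s <= z[k] do
      k = k - 1
      s = ((f[q] + q * q) - (f[v[k]] + v[k] * v[k])) / (2 * q - 2 * v[k])
    end
    k = k + 1
    v[k] = q
    z[k] = s
    z[k + 1] = math.huge
  end
  k = 1
  for q = 1, n do
    while z[k + 1] < q do k = k + 1 end
    d[q] = (q - v[k]) * (q - v[k]) + f[v[k]]
  end
  return d
end
```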

Diagonal Lines and Errors at Nearly Zero Distances

Distance contours around a collection of points.

Unfortunately, I soon discovered that this type of algorithm wouldn’t work well for my intended purposes. The culprit was the interpretation of input pixels as zero-dimensional points. Any information about the polygon that generated the input bitmap was lost in the conversion from vertices to a rasterized image. Observed with the naked eye, the raw distance field looks fine; the errors are most prominent at the smallest distances, in the nearly black regions of the distance field, so they are easy to overlook. These errors occur when a diagonal line is converted into a stair-step pattern of pixels in the input.

Distance contours around line segments; discrepancies highlighted.

The first image shows contour lines for distances of 1, 2, and 3 pixel units from the set of points provided. Note how the contour lines are quite busy at a distance of 1 unit, but begin to smooth out quickly as distance increases.

In the second image I have overlaid the contour lines that would have been generated from the original lines that were rasterized into point pixels. There are clearly a number of discrepancies between the two contours. The areas circled in red are areas where the distance is calculated to be larger than it ought to be, because the algorithm is only calculating the distance to the nearest pixel center, oblivious to the fact that it is closer to the midpoint of the line segment than to either of the end points. The green area is particularly severe because the points generated from rasterization don’t even lie on the lines that produced them. The discrepancies in blue are ultimately irrelevant, because they disappear at pixel centers, where distances are actually evaluated.

But how bad are the errors when the distance field is actually used to render stylized borders? Unfortunately, they’re pretty bad, at least when zoomed in. Below left is the result with the above errors, and to the right is the intended result, which I was eventually able to obtain. Undeniably noticeable, so I was stubbornly determined to avoid this problem.

Jagged/wiggly borders from an inaccurate distance field, and clean straight lines when using an accurate distance field.

The Medial Axis, Intense Mathematics, and So Close but Not Quite

I decided to solve this problem by calculating distances directly from the original shapes, rather than from the rasterized point fields generated from them. I realized that if I could break a polygon down into regions such that all the points within a single region were known to be closest to the same edge or point, it would be rather simple to encode that information into triangle vertices, and calculate the distance per pixel in a fragment/pixel shader.

After thinking, researching, and receiving some terminological assistance from Igor Brejc, I learned that a medial axis was the geometric construct I was looking for. The medial axis for a shape is the set of all points within that shape which are equidistant to two or more nearest points on the shape’s boundary. This means that all the points between the medial axis and the boundary are each closest to exactly one point on the boundary, precisely what I’m looking for. Below is a slight variation of the medial axis, calculated from a randomly generated two-dimensional polygon. The variation is for points of concavity. The two lines emanating out from such points wouldn’t exist in the actual medial axis, but they are important for my purposes, because they separate three regions. The middle region between these two lines contains points which are all closest to the singular concave point, while the two neighboring regions contain points which are closest to the respective line segment.

A variation of a medial axis for a randomly generated polygon, with the corresponding grayscale distance field inside.

You might notice that for the regions along a line segment, the distance field is a simple linear gradient, while for those that are wedges focused on concave points, the distance field is a radial gradient. This fact is what makes it easy to calculate distance within a fragment shader, as you’ll see near the end of this article. And since this distance is directly based on the original geometry, it maintains accuracy at all distances.

A medial axis variant and distance field for a highly complex polygon on the surface of a sphere.

Generating the medial axis was a daunting task. I started with just the two-dimensional Euclidean case, and once I had that working, I adapted it to work with polygons on the surface of a sphere. Both cases had their struggles. I won’t go into too much detail about the algorithm I developed, because ultimately it didn’t work reliably enough, but it did give me the understanding I needed for developing my final solution.

There are two primary mathematical operations to be performed in my computation of the medial axis. The first is to determine the equation of the medial line dividing two boundary components. If the two boundary components are both line segments, then their division is simply the straight median line halfway between them. If they are both concave points, then the division is again just a straight line halfway between the points, orthogonal to the line connecting the two points. But if one of the boundary components is a line segment and the other is a concave point, then the dividing line is parabolic, due to the relation between the focus point and the directrix line. Calculating this line isn’t very hard, but using it later on will prove to be obnoxious.

The second major mathematical operation is finding the nearest intersection point of two of these medial lines. Intersecting two straight lines is trivial, and there will only ever be one intersection point, so the closest one is obviously the only one. Intersecting a straight line and a parabola is messier, but it simplifies down to the quadratic equation which is easy to use, and determining which of the potential two intersection points is closest is a simple check. (Note that for numerical stability, it’s better to combine the quadratic formula with the Citardauq formula.) But intersecting two parabolas with each other? Ugh. This requires finding the roots (up to four of them) of a quartic equation. There does exist an exact formula for finding the roots, similar in nature to the quadratic formula, but it is impractically huge. It may also have undesirable numerical qualities for all I know, and I doubt they’d be as simple to avoid as with the quadratic formula. I eventually found an iterative algorithm I could comprehend and implement, Bairstow’s method, finally getting me past that mathematical hurdle. (I had previously been trying to solve this geometrically, but that was turning my brain inside out and leading nowhere.)
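
As an aside, that stability trick is simple enough to show. Here is a generic sketch, not code lifted from Worldbuilder:

```lua
-- Numerically stable roots of ax^2 + bx + c = 0 (a nonzero): compute the
-- intermediate q so that we never subtract two nearly equal quantities,
-- then take one root from the classic formula and the other from Citardauq.
local function solveQuadratic(a, b, c)
  local disc = b * b - 4 * a * c
  if disc < 0 then return nil end
  local sign = (b >= 0) and 1 or -1
  local q = -0.5 * (b + sign * math.sqrt(disc))
  if q == 0 then return 0, 0 end  -- only when b and c are both zero
  return q / a, c / q             -- classic root, Citardauq root
end
```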

For polygons on the surface of a sphere, the operations are similar, but the primitives are different. Straight lines get replaced by great circles; these can easily be represented by a three-component vector which is the normal of the plane that contains the great circle. An additional vector, orthogonal to the normal, can be used to specify the starting point of a great circle ray, when that is needed. Parabolas get replaced by a closed curve which results from the intersection of an elliptical cone (apex at the origin) with a unit sphere (also centered on the origin); for my purposes, I just stored the parameters of the elliptical cone itself. Intersecting a cone with a plane (which passes through the origin) or a cone with another cone produces up to two or four origin-anchored rays, respectively; normalizing them then gives the points of intersection on the surface of the unit sphere.

Intersecting two elliptical cones wasn’t pretty. I ended up intersecting one cone with an elliptical slice of the other cone. This allowed me to use the trigonometric definition of an ellipse, and from there I needed to treat it as a Fourier series and find the complex eigenvalues of some matrix. To be honest, I was mostly in over my head at this point, but I managed to pull through. I just don’t understand what I was doing well enough to describe it. My version control notes aren’t much help, either. I’m a software engineer, not a mathematician, hehe.

Anyway, with these two mathematical tools in hand, medial lines and their intersections, the general algorithm is the following:

  1. Create an array of open regions, one for each line segment of the polygon, as well as one for each concave point. A region will track the two chains of medial lines emanating from the polygon boundary toward the interior.
  2. For each pair of adjacent regions, determine the medial line that separates them. Two line segments sharing a convex point use a simple medial line, while a line segment ending with a concave point will use a straight line orthogonal to the line segment as the medial line. (No parabolas are introduced in this initial pass of medial lines.)
  3. Each region will now be associated with two medial lines, one on each side. Calculate the intersection of this pair of medial lines, and record the location and distance of the intersection. Record an infinite distance if they do not intersect.
  4. If there are no more open regions, exit the loop. Otherwise, select the open region whose recorded intersection has the smallest distance. If all of the remaining open regions have an infinite recorded distance, meaning no valid intersection, then you’ve been burned by numerical instability.
  5. Mark this region as closed; the medial lines on its two sides have converged, creating a fully closed area.
  6. Find the two open regions that are nearest to this region, one to each side.
  7. If the two open regions are also adjacent to each other in the list of open regions, close these two regions also, and return to step 4.
  8. Determine the medial line between the two regions, based on the line segment or concave point that is the base of each region.
  9. Add this new medial line to the appropriate side of each of the two regions.
  10. This will supersede the previous medial line on that side of the region, so for each of these two regions, recalculate the intersection point of its two innermost medial lines. Replace the regions’ previous recorded intersection points and distances with these new values.
  11. Return to step 4.

Once all the regions have been closed, I can walk up both sides of each region building a triangle mesh for that region. That mesh can then be rendered using a fragment shader to compute the distance (I have actual GLSL code for that further down). Parabolic edges need to be subdivided somewhat to approximate the curve.

A failure to generate a medial axis.

However, it was the caveat in step 4 (along with a few other integrity checks) that persistently drove me crazy trying to get this algorithm to work reliably. Each time an intersection point is calculated and a new medial line is determined extending from that intersection, small amounts of numerical error accumulate, which can eventually manifest in a puddle of confusion when the final open regions converge in the center of the figure. Worse still, inconvenient narrow points between a pair of concave sections can disrupt this algorithm further, generating incorrect intersection points and throwing everything out of whack.

I eventually got it to usually work well enough, but I was still haunted by the occasional shape that would severely break everything. After a bit of inspiration, I thought I might be able to take the earlier pixel-based algorithm and adapt it to cube maps. That algorithm at least has the attribute of being incredibly reliable and predictable, even if it does have the accuracy problems described. I also had some ideas about trying to minimize those errors, so I ditched my medial axis approach and went to work on something more inventive.

Distance Fields and Cube Maps

A cube map is nothing fancy; it’s just six individual square textures of equal size, one for each surface of a cube. Among other purposes, a cube map can be used to represent a texture on the surface of a sphere, with only a moderate amount of distortion, but very easy storage and lookup. This is the route I eventually went for storing data across the surface of the planet.

When using the medial axis method above, generating the final distance field requires rendering to a texture. This is trivial when the texture is a rectangle. Rendering to a cube map is a bit more involved, but only a little. The naive method is to simply render the same mesh six times, once for each face of the cube. With a little bit of care, one can also do some frustum culling and thereby lower the triangle count, but I’m pretty sure I was fillrate limited, not triangle rate limited, so it wasn’t an urgent optimization. This also isn’t something that has to be done per-frame within a 60 FPS context, so even if it takes 100 milliseconds, that’s not actually a catastrophically huge number for me.

When using a more conventional algorithm, cube maps present an interesting challenge, because pixels aren’t computed in isolation, but as part of a larger process which takes multiple pixels into account. With the linear algorithm I mentioned earlier, processing is done independently on one-dimensional slices of the input image; first on all of the rows, and afterward on all of the columns. This is simple enough when working with an ordinary rectangular image, but what counts as a row or column in a cube map?

I realized I could process one-dimensional rings, each ring consisting of four rows/columns, one from each of four faces. I also did some extra backward and forward propagation in order to get the wrap-around effect. (This really uglified the code.) In total, there are three sets of rings: those that include the front, back, left, and right faces, those that include the front, back, top, and bottom faces, and those that include the left, right, top, and bottom faces. I thought I’d be able to reuse a lot of this computation, but in the end I needed to run this algorithm twice over each of the three sets of rings. That meant that every pixel was processed a total of 24 times. With a cube map where each face is 1024 by 1024, that’s a total of 151 megapixels of processing. It’s a good thing this algorithm is linear! At that resolution, it was reasonably snappy; only took around a second if I remember correctly.
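
As a sketch of just the indexing idea (face orientations differ between conventions, so the per-face traversal directions are glossed over here):

```lua
-- One ring from the horizontal set: row y of the front, right, back, and
-- left faces, concatenated so that the last pixel wraps around to the first.
local function horizontalRing(y, faceSize)
  local ring = {}
  for _, face in ipairs({ "front", "right", "back", "left" }) do
    for x = 1, faceSize do
      ring[#ring + 1] = { face = face, x = x, y = y }
    end
  end
  return ring  -- feed to the 1D passes, with the wrap-around propagation
end
```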

The next task was to tackle the inaccuracies described above. I don’t think I was aware before this point that the ugly effects would be as bad as they were; otherwise I might not have dropped the medial axis in favor of returning to this algorithm. Regardless, I got the cube map stuff working, so I was going to see if I could make it work.

The conventional approach with distance fields is to generate them at high resolutions, and then downsample to the desired final resolution. A 4 by 4 downsampling seemed sufficient to give me the accuracy I need, and since I was aiming for a final cube map face size of 2048 by 2048, that meant an initial face size of 8192 by 8192. Unfortunately, this was 64 times as many pixels as my 1024 by 1024 numbers above, and sure enough, it took well over a minute to process. This could perhaps barely be considered adequate for my purposes of random planet generation, but I still would rather not have my users wait for over a minute every time they regenerated a planet. Worse, I knew I would eventually need more than one distance field per planet. Tectonic plates would use one; coastlines another; political boundaries a third, and possibly more if there were different types of political regions; a tiling effect could benefit from one. I decided that a 10 minute generation process is completely unacceptable, and so I started cranking the gears in my head to come up with a more clever solution, one that could run at the target resolution of 2048 by 2048.

I first tried saving a variety of sub-pixel information when rasterizing polygons to pixels, so that I could preserve most of the polygon’s detail. Things like the offset from pixel center to the nearest point on the polygon, or the angle of the outline at that point, for pixels which intersect the polygon’s outline. The Felzenszwalb & Huttenlocher algorithm is structured such that I could use this data in relevant ways, mostly as sub-pixel offsets that could be applied and propagated. These attempts weren’t complete failures, but I couldn’t find a scheme that enabled lines of all angles to look equally nice. There were always particular angles that were just as bad as without any sub-pixel adjustment at all.

I then wondered if I could get away with just being lazy, and run some sort of post-processing blur to soften the contours after running the distance field algorithm. I tried to make it smart enough to recognize that the expected gradient should be constant, and so the amount and type of averaging should be dependent on how far from this expected constant a pixel’s value is, when compared to its neighbors. For example, consider a common case where a pixel’s recorded distance is 1, its neighbors up and to the right are 1.4, and its neighbors down and to the left are 0. I can see that the distance of 1 is too large, and 0.7 is going to produce a much more constant gradient for this pixel.

That was the theory. I don’t recall it working as well as I had hoped in practice. Fixing one pixel’s gradient screwed up another’s, and pixels where the distance field legitimately changes direction (on the outline of the polygon and on the polygon’s medial axis) were also problematic.

After all of this effort, this was the point that I gave up. I decided that I had stubbornly spent far too much time on a feature that was merely a nice-to-have, not something critical to core functionality. So I made peace with the poor quality when zoomed in, and chose to live with that compromise. Time to get on with other features.

Accidental Discovery of a Hacky but Effective Solution

The very next day, I decided to tackle the generation of a distance field for a tile grid, so that I could overlay this grid on top of the primary visualization layer, and style it however I wanted (thin or thick lines, soft edged or solid, singular or grooved, colored, et cetera). But the nature of this particular distance field produced particularly egregious visual artifacts when using my above compromise. This was mostly because the distances closest to zero were even more important than in my first use of distance fields with tectonic plates, and these are the distances with the least accuracy. For this application, I absolutely needed to preserve the polygonal information that is lost during rasterization.

I decided to try out a hybrid approach: Use something like the medial axis method to generate an “apron” inside each polygon, thick enough to cover a few pixels from the outline. This ensures that the pixels with distances close to zero maintain maximum accuracy. Then finish off the interior of the distance fields by using the per-pixel method that works reliably for any shape. This way, I avoid all the crazy and numerically unstable math involved in the medial axis computation, but still get the accuracy that it provides in the areas where it matters.

To test this idea out, I naturally started with the first step, generating the apron. If you remember above, the first pass of the medial axis algorithm I developed does not involve any parabolas. It’s all just straight lines, or when working on the surface of a sphere, just great circles. So this first pass is what I focused on.

Since I was only doing one pass around the outline and wasn’t doing any internal intersections to actually find the medial axis, I had to find another way to close off each region. For line segments on the outline with at least one convex end point, the medial lines on its left and right will converge, so a single triangle will suffice. If the convergence point is further away from the base of the triangle than the intended apron thickness, or for line segments between two concave points, a pair of triangles can be used to draw a quadrilateral, of size based directly on the apron thickness chosen. And for concave points themselves, a fan of triangles can fill the wedge, again using the apron thickness as the radius.

One flaw with this method is that there are plenty of circumstances where the apron will overlap itself, and will therefore overwrite itself with incorrect distances. But it turns out the solution is quite simple: when blending a new pixel with an existing pixel, only do so if the new pixel has a smaller distance than the old one. In OpenGL, this is the GL_MIN blend mode; for DirectX 11, it is D3D11_BLEND_OP_MIN. Below you can see the result of this process. Note how the triangle mesh gets rather messy in busy areas, but the resulting distance field remains clean and accurate.

A polygonal “apron” triangle mesh (left), and the resulting incomplete distance field (right).

With that working, I was ready to see the effects of finishing off the interior of the distance field with a more conventional algorithm. But before I tackled that task, I got curious. What if I simply increased my apron thickness to be high enough to cover the entirety of every polygon, making a second stage of processing irrelevant? Using the minimum blend mode should at least allow correct functioning, even if the overdraw would be terrible. Given that it was a simple change to a constant value, I gave the experiment a try. Here you can see the results with a moderate increase in apron thickness, followed by a more aggressive increase.

A thicker polygonal “apron” triangle mesh (left), and the resulting incomplete distance field (right).

An extremely thick polygonal “apron” triangle mesh (left), and the resulting distance field (right).

As you can clearly see in the left images, the overdraw of the aprons becomes rather absurd. And yet the output is exactly what I want, and has near-perfect accuracy at all distances. And in practice, despite the horrible overdraw, it still tends to finish in only a few hundred milliseconds, vastly superior to the minute-plus I was experiencing when getting similar quality from an 8192 by 8192 by 6 cube map downsampled to the same resolution. Of course, this is almost entirely due to the fact that it is hardware accelerated, using the GPU in pretty much precisely the way the GPU was designed to be used. All I’m doing is drawing triangles with a fairly basic fragment shader using a standard blending mode.

The Final Algorithm in Detail

Wow, yeah, that was a huge preface. There are lots of interesting ideas in there, I’m sure, tidbits of information that the adventurous might be able to put to use, but I don’t blame anyone who skipped straight to this section.

Full working source code (albeit designed specifically for my own framework) is included in my Worldbuilder planet generation software, and is open sourced under the Apache 2.0 License. It is also included in the free demo, so you certainly don’t have to pay to get your hands on it. In version 0.2.1, the relevant code is in the file ModuleSuites/BasicModuleSuite.lua, from lines 1562 to 1917. Or search that file for the text "function operations.CreateDistanceField(". The GLSL code is listed in the same file but separately; lines 1393 to 1429, starting with the text "function operations.CreateDistanceFieldGeneratorShaderProgram(".

Given that this is designed for the surface of a sphere, all vertices of the polygons are represented as unit vectors, and the output is a cube map texture. For the most part, adapting this for two-dimensional use should be a simplification. One exception is the fragment shader, which as you’ll see below is able to take advantage of a property of spheres; one would need a different approach for a flat surface.

Apron Thickness

As described in the prior section, we need to generate an internal apron mesh within each polygon which is to be represented by the distance field. This apron needs to be thick enough to guarantee that the entire polygon will be filled, but we don’t want it to be too large, or else the GPU will have more work to do with no benefit. The minimum thickness is dependent upon the size and shape of the polygon; small or thin polygons allow thinner aprons, while larger and rounder polygons will require thicker aprons.

As a simple approximation, I currently just estimate the center of the polygon, search for the vertex that is farthest from this center, and then use the arc length between the two. Unfortunately, to my knowledge, estimating the center of a polygon on the surface of a sphere isn’t trivial. Simply summing together all of the vertices and normalizing the result works well for small polygons. But for near-hemispheres, the pre-normalized vector is nearly zero. And for polygons larger than a hemisphere, the estimated center is actually the center of the area outside of the polygon, while the intended center is on the opposite side of the sphere.

To avoid this latter problem, the winding order of the vertices needs to be taken into account, to know which side of the polygon’s outline is considered the interior. Cross products are useful here, since they are sensitive to the order of their operands. As for the former problem of near-zero length vectors, I have not yet found a single solution, so instead I try one solution first, and if its resulting vector is too small before normalization, I switch to a second solution that is robust in exactly the conditions where the first one is weak.

Both solutions involve the summation of cross products. The first solution calculates the vector from each vertex to the next, normalizes these vectors, and then takes the cross product of adjacent vectors. These cross product results will point out of the sphere for convex vertices, and into the sphere for concave vertices, so these tend to cancel each other out. But one or the other will dominate, and the normalized result is a reasonable estimate of the center’s position, including which side of the sphere the center is on.

However, if the polygon is a near-hemisphere, the above solution begins to break down. The pre-normalized vector is nearly zero, and is thus strongly affected by subtleties of the various cross products, causing the estimate to be rather poor. So if the result of the above process is too short before normalization, it’s time to try the second solution. This one is actually easier: Take the summation of cross products of each vertex and its following vertex; normalize the final sum. It’s highly reliable for near-hemispherical polygons, but degrades for either really small or really large polygons.

I default to the former solution first because the majority of my polygons fall under its jurisdiction. For cases in which the second solution is needed, I’ll have fewer total polygons anyway, so it doesn’t much matter if I waste time with the first computation before trying the second one. And in the end, this is such a small part of the overall process, computationally speaking, that I’m probably wasting time even thinking about it at all.
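
In pseudo-code, the two estimates look something like this; a small vector library with a vec3 constructor, overloaded arithmetic, cross(), normalize(), and lengthSquared() is assumed, as is a suitable epsilon threshold:

```lua
local function estimateCenter(vertices)
  local n = #vertices

  -- First estimate: cross products of adjacent (normalized) edge vectors.
  -- Convex corners contribute vectors pointing out of the sphere, concave
  -- corners vectors pointing in; whichever dominates marks the interior side.
  local sum = vec3(0, 0, 0)
  local prevEdge = normalize(vertices[1] - vertices[n])
  for i = 1, n do
    local nextEdge = normalize(vertices[i % n + 1] - vertices[i])
    sum = sum + cross(prevEdge, nextEdge)
    prevEdge = nextEdge
  end
  if lengthSquared(sum) > epsilon then
    return normalize(sum)
  end

  -- Second estimate, robust for near-hemispheres: cross products of each
  -- vertex with the following vertex.
  sum = vec3(0, 0, 0)
  for i = 1, n do
    sum = sum + cross(vertices[i], vertices[i % n + 1])
  end
  return normalize(sum)
end
```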

Building the Apron Mesh

Once a thickness has been chosen, the apron mesh can be constructed. In the source code included with Worldbuilder, I have divided the process up into multiple stages in order to make it easier for the LuaJIT compiler to do its job and provide effective optimizations, but for this description I’ll bundle up all the work into one pass around the vertices of the polygon.

The operation proceeds one line segment at a time around the outline of the polygon, and needs information from the previous and next line segments also. One of the major pieces of data we need to calculate is the normal vector of each line segment, which is tangential to the sphere’s surface. We’ll also need to know if each vertex is convex or concave. Here is Lua-style pseudo-code for how one might structure the loop around the full polygon:
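
```lua
-- Reconstructed in spirit (the shipped Lua source is the authoritative
-- version). A small vector library providing cross(), dot(), normalize(),
-- and overloaded arithmetic is assumed. Signs follow one winding
-- convention; flip the cross products if your polygons wind the other way.
local vertexCount = #vertices
local normals = {}    -- per segment: unit normal of the plane containing its
                      -- great circle, tangent to the sphere, interior-facing
local isConvex = {}   -- per vertex

for i = 1, vertexCount do
  normals[i] = normalize(cross(vertices[i], vertices[i % vertexCount + 1]))
end

for i = 1, vertexCount do
  local prevSegment = (i - 2) % vertexCount + 1
  local nextSegment = i   -- the segment leaving this vertex
  -- Direction of travel along the next segment, tangent at this vertex.
  local direction = cross(normals[nextSegment], vertices[i])
  -- See the discussion below: a non-negative dot product marks a convex
  -- vertex (zero, a straight continuation, counts as convex).
  isConvex[i] = dot(normals[prevSegment], direction) >= 0
end

for i = 1, vertexCount do
  -- ...emit the triangle or quad for segment i, plus a fan around its
  -- starting vertex if concave (see the next two listings)...
end
```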

The convexity check is performed by examining the angle between the normal vector of the previous line segment and the direction vector of the next line segment (provided by the cross product of the vertex and the normal of that next line segment). The dot product of these two vectors has the same sign as the cosine of the angle between them, by the geometric definition of the dot product. If the angle is less than 90 degrees, the cosine (and thus the dot product) will be positive; in that case, the angle between the two line segments themselves is less than 180 degrees, and thus the vertex is convex. If the angle between the normal and direction vectors is greater than 90 degrees, the cosine will be negative, the angle between the line segments is greater than 180 degrees, and the vertex is therefore concave. The special case of a dot product of exactly zero (when the two line segments don’t change direction at all, aside from the spherical curve) can be handled either way, but treating it as convex works out most conveniently in this context.

Once we have access to four adjacent vertices, the normals of the three connecting line segments, and the convexity of the two middle vertices, we can determine the triangles that need to be added to our apron mesh. This occurs in two parts. The first is to add either a triangle or a quad along the middle line segment, depending on the convexity of the two neighboring vertices. The second is to add a fan of triangles around one vertex if it is concave.

To decide between the triangle and the quad, we need to get the two medial lines between the first/second and the second/third line segments. If the second vertex is convex, the first medial line is simply halfway between the normal vectors of the first and second line segments. This can be easily calculated by adding the two normals together and normalizing the sum. But if the second vertex is concave, we know that there will be a triangle fan around that vertex which ends along the normal vector of the second line segment, so we can just treat that normal vector as the medial line. The logic for the second medial line is similar, checking the third vertex for convexity, and using the normal vectors of the second and third line segments. The only difference is that in the case of a concave point, the normal of the second line segment is used again, instead of the third.

Now that we know the line segment and the two medial lines extending from its two endpoints, we almost have enough to create a triangle from them. But depending on how aggressively the two medial lines converge and intersect, we might have a really long narrow triangle that extends well past our chosen apron thickness, in which case we should truncate the triangle and save our GPU some pressure on the fillrate. So we can compute where the medial lines would end if we truncate them according to the apron thickness, compare that with where the two medial lines intersect each other, and then pick the shorter of the two. If the intersection is shorter, we have three vertices to define a triangle: the line segment’s two end points and the intersection point. Otherwise, we have four that will make a quad: the line segment’s two end points and the truncated locations of the two medial lines. I’ve abbreviated some of the calculations, but here’s the essential work:
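
```lua
-- Reconstructed sketch of the emission step for segment i; the shipped Lua
-- source is the authoritative version. intersectGreatCircleRays() is an
-- assumed helper returning the nearer intersection point of two
-- great-circle rays plus the arc length from the ray origins to it
-- (or nil if the rays diverge).
local v1 = vertices[i]
local v2 = vertices[i % vertexCount + 1]
local prevSegment = (i - 2) % vertexCount + 1
local endVertex = i % vertexCount + 1     -- also indexes the next segment
local n = normals[i]

-- Medial directions at the segment's endpoints, as described above.
local m1 = isConvex[i] and normalize(normals[prevSegment] + n) or n
local m2 = isConvex[endVertex] and normalize(n + normals[endVertex]) or n

-- Feature vector handed to the fragment shader as the fourth argument:
-- the segment's (interior-facing) plane normal, with w = 1.
local feature = { n.x, n.y, n.z, 1 }

local p, arc = intersectGreatCircleRays(v1, m1, v2, m2)
if p and arc < apronAngle then
  -- The medial rays converge within the apron: a single triangle suffices.
  AddTriangle(v1, v2, p, feature)
else
  -- Truncate both rays at the apron thickness; since each vertex and its
  -- medial direction are orthonormal, the results stay on the unit sphere.
  local cosT, sinT = math.cos(apronAngle), math.sin(apronAngle)
  local t1 = v1 * cosT + m1 * sinT
  local t2 = v2 * cosT + m2 * sinT
  AddTriangle(v1, v2, t2, feature)   -- quad, as two triangles
  AddTriangle(v1, t2, t1, feature)
end
```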

To calculate the triangle fan, we need to subdivide the wedge into multiple triangles, dependent on the maximum size of the arc we want each triangle to cover. It’s probably safe to let this maximum arc be fairly large, minimizing the total number of triangles generated for the fans. The rest of the work is just standard vector manipulation and trigonometry to sweep the outer vertices around the pivot vertex.
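
Here is a sketch of that sweep, with slerp() between unit tangent vectors assumed as a helper:

```lua
-- Fan of triangles around a concave vertex v. fromDir and toDir are the
-- interior-facing normals of the two segments adjacent to v, bounding the
-- wedge of interior points that are nearest to v itself.
local function addConcaveFan(v, fromDir, toDir, apronAngle, maxFanArc)
  local wedge = math.acos(math.max(-1, math.min(1, dot(fromDir, toDir))))
  local steps = math.max(1, math.ceil(wedge / maxFanArc))
  local feature = { v.x, v.y, v.z, 0 }  -- nearest element is the point itself
  local cosT, sinT = math.cos(apronAngle), math.sin(apronAngle)
  local p0 = v * cosT + fromDir * sinT
  for s = 1, steps do
    local dir = slerp(fromDir, toDir, s / steps)
    local p1 = v * cosT + dir * sinT
    AddTriangle(v, p0, p1, feature)
    p0 = p1
  end
end
```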

Rendering to the Cube Map

With the triangle mesh constructed, we’re ready to pass the data off to the GPU to render. Because this algorithm is rather naive and heavy on the fillrate, I discovered that it is vitally important to divide this up into multiple draw calls. Otherwise, when a single draw call takes too long, my Windows 7 system either becomes obscenely unresponsive, or the OS decides to kill the display driver for taking too long. This behavior is probably highly dependent upon OS and drivers, but I learned that it is definitely something to be aware of.

Otherwise, this is mostly an ordinary case of rendering to a cube map. Creating the cube map and setting up the camera matrices will follow the conventional form for whatever framework or API you are using. The cube map should use single-component floats as its texel type. Set the blend mode to select the minimum value of the existing or new pixel value, and clear the entire render target to 1.0 (the maximum distance that can be calculated). You can optionally do some culling, rendering to only a subset of the cube faces for each draw call you make, but I haven’t yet gone that far. Just as I mentioned earlier, I’m pretty sure this process is very much fillrate limited, so I suspect that reducing the triangle count is going to have a negligible effect.
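
In the same pseudo-code style, the overall setup amounts to something like this; every function name here is a placeholder for whatever your framework provides:

```lua
local cubeMap = CreateCubeMap(2048, "float32")  -- one float per texel
ClearRenderTarget(cubeMap, 1.0)                 -- 1.0 = farthest possible distance
SetBlendMode("min")                             -- GL_MIN / D3D11_BLEND_OP_MIN

-- Several smaller batches rather than one giant draw call, as noted above.
for _, batch in ipairs(triangleBatches) do
  for face = 1, 6 do
    SetRenderTarget(cubeMap, face)
    SetViewProjection(cubeFaceMatrix[face])
    Draw(batch)
  end
end
```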

The interesting aspect of this stage is the shader code. The vertex shader is rather simple, just passing the vertex information on to the fragment shader. Each vertex has two attributes: a typical 3-component vector of the position, and a 4-component vector that encodes information about the point or edge of the polygon outline that is closer than any other to all of the points within the triangle being rendered. This second attribute is what the fourth parameter in the above calls to AddTriangle() is all about. It is also the same for all three vertices of a triangle, making interpolation irrelevant, hence the flat qualifier.

The fragment shader is where the real fun happens. It is essentially calculating the arc length from the location of the fragment to the nearest location on the polygon outline, using the arctangent formula for maximum accuracy.

As stated, the 4-component vector encodes information about this nearest location, but it’s a little tricky. The xyz components are a simple vector location, and for triangles in a fan around a concave point, they are the concave point, and thus the nearest point on the polygon outline. Calculating the fragment distance is then a direct application of the arc length formula mentioned above.

But for triangles which are closest not to a single point, but rather to a line segment, we can’t store the nearest point, because there is no single nearest point. Instead, I realized that I could store the unit normal vector of the plane that passes through the line segment. Since this is on the surface of a sphere, I know that this normal vector is exactly 90 degrees away from any point on the line segment itself. Therefore, I can calculate the arc length from any other point to this normal vector, subtract this length from the full 90 degrees (or π / 2 radians), and whatever I have left over is the arc length from the original point to the line segment itself.

Since we have these two different types of triangles to handle, we could just stick an if statement into our shader, but it’s good to avoid that if possible. We could alternatively have two separate triangle lists, one for triangles/quads along line segments, and another for triangle fans around concave vertices. Thinking about this option, I suspect it would actually be a decent optimization, especially in circumstances when most polygons have no concave vertices at all. (One of the two distance fields I’m currently computing in Worldbuilder falls into this category.)

Instead, I decided to use the w component of my 4-component vector attribute as a multiplier, and it works well enough. When this component is zero, the fragment distance is simply the distance we already calculated, appropriate for the concave vertex case. When it is one, the formula simplifies down to halfPi - d, appropriate for the line segment case.
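
Pulling all of that together, here is a sketch of what the shader pair can look like. The shipped GLSL is in BasicModuleSuite.lua as noted earlier; the version below is a reconstruction, with illustrative attribute and uniform names.

```glsl
// Vertex shader: transform by the current cube face's view-projection and
// pass the feature vector through unchanged. The 'flat' qualifier disables
// interpolation, since all three vertices of a triangle carry the same value.
#version 330

uniform mat4 viewProjection;

in vec3 position;   // unit vector on the sphere
in vec4 feature;    // xyz: nearest point, or segment plane normal; w: 0 or 1

out vec3 spherePosition;
flat out vec4 nearestFeature;

void main()
{
    spherePosition = position;
    nearestFeature = feature;
    gl_Position = viewProjection * vec4(position, 1.0);
}
```

```glsl
// Fragment shader: arc length from this fragment to the encoded feature,
// computed with the numerically robust arctangent form, then remapped so
// that pi radians (the far side of the sphere) becomes 1.0.
#version 330

const float halfPi = 1.57079632679;
const float pi = 3.14159265359;

in vec3 spherePosition;
flat in vec4 nearestFeature;

out float distanceOut;

void main()
{
    vec3 p = normalize(spherePosition);
    vec3 f = nearestFeature.xyz;
    // atan(sine, cosine) of the angle between p and f is accurate at all angles.
    float d = atan(length(cross(p, f)), dot(p, f));
    // w = 0: f is the nearest point itself, so the distance is d itself.
    // w = 1: f is the segment's plane normal, which sits 90 degrees from
    //        every point of the segment, so the distance is halfPi - d.
    d = mix(d, halfPi - d, nearestFeature.w);
    distanceOut = d / pi;
}
```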

It’s also a flexible enough scheme that I can take the exact same triangle input, change the fragment shader around a bit, and produce a distance field that is instead based on Euclidean distance, getting rid of the expensive call to atan(), or even better, one based on squared Euclidean distance, which would also avoid a call to sqrt(). This can be useful when we only want to use a distance field for rejecting fragments in future draw calls. For example, I have considered using the same process for calculating tectonic plate effects extending inward from the boundaries between tectonic plates. But in this advanced case, my output won’t be a simple value like distance that can be resolved using the GL_MIN blend mode. Thus it would be convenient to quickly calculate the squared Euclidean distance for each fragment, and if it is greater than the value already recorded during the earlier distance field generation, then this fragment can be rejected without performing any tectonic effect calculations. Whatever edge this current triangle thinks it’s closest to, this particular fragment isn’t in fact closest to that edge; a future (or already rendered) triangle will contain this fragment and will have the true closest edge information encoded.
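
The heart of such a shader variant might look like the following (a sketch under the same feature-vector encoding as above; note that the great-circle case still spends one sqrt):

```glsl
// Squared chord (straight-line) distance from unit vector p to the encoded
// feature f: for w = 0 the feature is the nearest point itself; for w = 1
// it is the plane normal of the segment's great circle.
float c = dot(p, f);
float dSq = 2.0 - 2.0 * mix(c, sqrt(max(0.0, 1.0 - c * c)), nearestFeature.w);
```

Because chord length grows monotonically with arc length, comparing these squared values against a squared-distance field gives the same accept/reject decisions as comparing true arc lengths.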

Final Thoughts, Possible Improvements

So that’s it! Pick an apron thickness, build an apron triangle mesh, and render to a cube map with the above arc length fragment shader, and the result will be a full spherical distance field, with distances ranging from 0 to 1 (with 1 representing the maximum possible shortest distance between two points on a sphere, separated by an angle of 180 degrees). If you were to stitch the six faces together, you might end up with something like this:

The six faces of a distance field cube map, generated from simulated tectonic plate polygons.

When mapped onto a sphere, the above distance field looks like the left image below, and can be used to generate a wide variety of stylistic representations of the shapes used to generate the distance field. Two such styles are demonstrated in the middle and right images below. (The color for each region was determined by a process separate from creating or using the distance field.)

A distance field mapped onto a sphere (left) and used in a couple of different stylized renderings (middle, right).

If I didn’t mention it enough already, I want to emphasize that this method, in the incarnation presented above, is stupidly inefficient, and is probably only as fast as it is because it uses the GPU precisely as it is designed to be used. I feel a little dirty using it myself or telling others about it, but hey, if it works, it works, right?

More importantly, I believe that the underlying concept is sound, and there are probably numerous ways to clean it up and reduce the overdraw. Although the bulk of the written code executes on the CPU, the majority of the computation time is spent in the really short fragment shader. So spending some extra time on the CPU generating a more efficient triangle mesh could significantly decrease the number of fragments the GPU has to compute and then overwrite. Generating a medial axis is in fact the ideal case, because it provides perfect coverage with absolutely no overdraw. When this can be guaranteed, the minimum blend mode can even be dropped in favor of an unconditional write to the render target.

But short of getting the medial axis algorithm working, there are other partial improvements that could be achieved. A more intelligent computation of the minimum safe apron thickness could go a long way for situations where many polygons are long and narrow, because my above estimation is very cautious and recommends a much larger thickness than is necessary for these types of polygons. Also, from some of the earlier pictures of generating the apron, you can see that there is a lot of overlap in regions where the polygon outline is extra crinkly. A simplified version of computing an approximate medial axis which allows some interior overlap, knowing that the blend mode will handle it properly, might be capable of producing a much more reasonable triangle mesh. At the time I was originally working on the medial axis code, I hadn’t thought of just using the blend mode to rectify imperfections, so I was aiming for a highly accurate medial axis. Now that I have this additional tool at my disposal, a revisit of the medial axis approach might yield better and more robust results with simpler code.

There’s also the two-dimensional case which could benefit from the same type of GPU-enhanced computation of distance fields. As I said above, it should mostly be a simplification of the spherical case. But my method of using the plane normal and subtracting the computed distance from 90 degrees would need to be rethought, since the equivalent process for 2D would be to have a point infinitely far away from the origin, and subtract the computed distance from infinity. Not a viable strategy. In this case, the approach of building two triangle lists would be far more appealing and natural.

Additionally, it would be nice to support shapes beyond straight-line polygons. I’m pretty sure that the concave vertices can be seen as a special case of a circular arc with a radius of zero. So it might not be too hard to support shapes consisting of straight lines, sharp turns, and circular arcs of various radii, which would provide a notable improvement in flexibility. Quadratic and cubic Bézier curves would be icing on the cake, and would make support for vector fonts pretty easy.

Finally, disconnected points and lines would be nice, rather than requiring everything be a closed shape, but that further complicates the process of finding a medial axis or even just estimating a minimum sufficient apron thickness. But I know it can be done because it already is being done (generating a medial axis of arbitrary points, lines, and shapes); I just haven’t yet had the insight, skill, or time to do it myself in my own codebase.

If anyone is able to use or extend some of the methods described in this post for their own purposes, or has ideas for uses or improvements, I’d love to hear about them in the comments. Thanks for reading!

4 Comments

Comment by Mark Erikson — 2015/08/22 @ 00:19

I have absolutely no use for this myself, and the math is well over my head. However, having gone off on a number of somewhat-crazy programming deep dives myself, I understand something of the thought process involved.

In any case, thank you very much for the excellent writeup, AND for open-sourcing the results. Again, I can’t imagine ever needing to use this, but I appreciate that you took the time to share the process, the algorithm, and the code.

Comment by Black — 2015/12/30 @ 17:25

I think your approach has merits but became needlessly complicated. The errors you found with the grid could be avoided in two ways. The first is to use a better method to find distances. I think the picture at the bottom of http://github.prideout.net/distance-picking/ is fairly succinct. Also, you’re really running into aliasing issues, so you could calculate at 4x and sample down. The second is to use coordinate transforms like in your cube map. If you operate in spherical coordinates, replace the normal distance function with the arc length, and calculate angles mod max_angle to the nearest <=180, you can achieve wrapping without considering it. Depending on how you want to transform from the distance to arc length… notice that for a given chord with straight-line distance there is only one arc length for a given radius. Then just write your cube map as normal using this function.

Comment by Aiekick — 2017/04/08 @ 15:21

Nice article.

But I think I found a better way to generate a distance field.

See my shader: https://www.shadertoy.com/view/MsjyWR

I used this distance field rendered in a framebuffer for my needs.

I just put the array of points and the count as uniforms in the shader.

This works with concave and convex shapes.

Have you experimented with a solution done purely on the GPU, like GLSL?
