While I think of what to do with A. CYBORG, time to switch gears. I had a brill idea for a graphics trick to do with my old landscape engine thingie and have been working on it fairly obsessively this weekend. This is only a proof of concept, but I can’t get over how well it came out.
The executive summary: notice the nice, soft lighting on this chunk of land? How it’s darker in valleys, a little dark around the edges of the mountains, and so forth? And that, in turn, makes it look like much more of a real object than a computer-generated one? That’s called ambient occlusion, my friends. A point at the bottom of a valley is only going to see a small chunk of the sky, and will therefore be darker than a point at the top of a hill which has a full view of the sky. And that’s what’s going on here.
To implement this, I added a step to landscape generation. For each point of landscape geometry, we plant a camera at that location looking directly up, with a wide field of view, and render the environment colored solid black against a solid white background. That gives us an occlusion map at each point, where the white pixels indicate how much of the sky is visible and in what directions. To test out how this looks, I am sampling the occlusion map down to a single pixel, whose level of brightness indicates how much sky is visible overall; that gives us the ambient occlusion level at each point, and thus the nice results above. (The old version of Air manually calculated occlusion by raytracing across the terrain in a couple of cardinal directions. It sort of worked, but it wouldn't catch obstructions that weren't located along a main axis, so you'd get noticeable artifacts. By contrast, this technique could handle anything — spires, walls, even bridges or other obstructing objects floating in the air.)
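Stripped of the rendering plumbing, the downsample step is just an average over that black-and-white render. Here's a minimal sketch in Python/numpy, assuming the upward-facing render has already been read back into an array with sky = 1.0 and terrain = 0.0 (the function and variable names here are mine, not the engine's):

```python
import numpy as np

def ambient_occlusion(occlusion_map):
    """Collapse a sky-visibility render down to a single AO value.

    occlusion_map: 2D float array from the upward-facing render,
    1.0 where the sky (white background) shows through and 0.0 where
    terrain (drawn solid black) blocks it. Returns the fraction of
    the view that is open sky, usable directly as a brightness factor.
    """
    return float(occlusion_map.mean())

# Toy example: a point in a narrow valley sees sky only overhead,
# so most of its 8x8 view is blocked and it comes out dark.
valley_view = np.zeros((8, 8))
valley_view[3:5, 3:5] = 1.0          # small patch of open sky
hilltop_view = np.ones((8, 8))       # unobstructed view of the sky

print(ambient_occlusion(valley_view))   # 0.0625 -> dark
print(ambient_occlusion(hilltop_view))  # 1.0    -> fully lit
```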
However, remember that we actually have a relatively detailed occlusion map available, not just one pixel. If, when we render, we also draw a picture of the sky (including the sun, clouds, sky color blend, and anything else going on up there) from a camera sitting on the ground and with the same wide field of view, multiply the occlusion map at each point by that sky map, and filter that down to a single pixel per point… suddenly we know the full illumination of each point on the map. Realtime landscape shadows fall out of that literally for free, as would cloud shadows, changing environmental lighting, and a whole lot of other things.
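In the same spirit, the "multiply by the sky map and filter down" step is a per-channel average of the masked sky render. A hedged sketch, again assuming both maps are already available as numpy arrays (sky_map as an H×W×3 RGB render, occlusion_map as the same 0/1 mask as above):

```python
import numpy as np

def illumination(occlusion_map, sky_map):
    """Total sky light reaching a point.

    occlusion_map: (H, W) array, 1.0 where the sky is visible.
    sky_map:       (H, W, 3) RGB render of the sky (sun, clouds,
                   color gradient) from the same wide-FOV camera.
    Returns an RGB triple: the sky averaged over only the visible
    directions. A point whose view of the sun is blocked loses those
    bright pixels and darkens automatically, which is where the free
    landscape and cloud shadows come from.
    """
    masked = sky_map * occlusion_map[..., np.newaxis]
    return masked.mean(axis=(0, 1))

# Toy sky: dim blue everywhere, one very bright texel for the sun.
sky = np.full((8, 8, 3), [0.2, 0.3, 0.5])
sky[1, 6] = [10.0, 9.0, 8.0]   # the sun, low in one corner

open_view = np.ones((8, 8))
blocked_sun = open_view.copy()
blocked_sun[1, 6] = 0.0        # a ridge hides exactly the sun

print(illumination(open_view, sky))    # bright: includes the sun
print(illumination(blocked_sun, sky))  # darker: the sun's contribution is gone
```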
Now, in order to really see all this working, I’d need to make it so the vertex shader can examine the whole occlusion map for each point — not just one pixel — and occlude the sky map with it. The latest bleeding-edge shader models have a facility for launching subsidiary renders while executing a shader, but I doubt my computer has that support or that it would be efficient enough to use so intensively anyway. I think the vertex shader is just going to have to do a long list of unrolled multiplications of the two maps. With the 8x8 resolution that I’m currently using, that’s 64 multiplies and adds. Feels kind of excessive to me, but by today’s standards that’s not really a big deal. And since it’s in the vertex shader, it just runs once per visible point in the scene instead of once per pixel as a pixel shader-based system would do.
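For what it's worth, that long list of unrolled multiplications amounts to a 64-term dot product per vertex. A sketch of the arithmetic in plain Python; the real thing would live in the vertex shader, with the two 8x8 maps passed in as per-vertex data and the loop actually unrolled:

```python
def vertex_illumination(occlusion, sky, size=8):
    """What each vertex-shader invocation would compute: one
    multiply-add per texel of the two size x size maps, 64 total
    at 8x8. Both arguments are flat lists of floats here; the sky
    map would carry three channels in practice."""
    total = 0.0
    for i in range(size * size):   # unrolled in the actual shader
        total += occlusion[i] * sky[i]
    return total / (size * size)
```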
The other tweak that occurred to me is that I could render the original occlusion map with the landscape drawn in its actual colors instead of solid black. Filtering that down to a single pixel would tell me the color of light reflected from the surrounding obstructions, and could thus be used for more accurate ambient lighting. I'm not sure how noticeable that would actually be, but it wouldn't be hard to experiment with and find out.
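That experiment is a small variation on the same downsample: average only the pixels where terrain (in its real colors) covers the sky. A sketch under the same assumptions as above, with hypothetical names:

```python
import numpy as np

def bounce_color(colored_view, sky_mask):
    """Average color of light arriving from surrounding terrain.

    colored_view: (H, W, 3) upward render with the landscape drawn
                  in its actual colors rather than solid black.
    sky_mask:     (H, W) array, 1.0 where the sky is visible.
    Averages only the occluded (terrain) pixels, giving a rough tint
    for reflected ambient light; returns black if nothing occludes
    the point at all.
    """
    terrain = 1.0 - sky_mask
    weight = terrain.sum()
    if weight == 0.0:
        return np.zeros(3)
    return (colored_view * terrain[..., np.newaxis]).sum(axis=(0, 1)) / weight
```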