Tuesday, September 16, 2014
"Hey why does the cool fractal planet look all blurry and low-res now?" you might be irately asking. The answer is, this is not the fractal planet. This is a reconstruction of it with vertex colors on a low-res globe. It’s proof of concept for a surface exploration mode, where surface geometry is generated based on the landscape information that was drawn out of the surface shader and stored in the surface map.
The next step is to be closer to the surface than geosynchronous orbit, of course. But first I wanted to confirm that the planet was being reconstructed correctly and actually matched the map.

"Hey why does the cool fractal planet look all blurry and low-res now?" you might be irately asking. The answer is, this is not the fractal planet. This is a reconstruction of it with vertex colors on a low-res globe. It’s proof of concept for a surface exploration mode, where surface geometry is generated based on the landscape information that was drawn out of the surface shader and stored in the surface map.

The next step is to be closer to the surface than geosynchronous orbit, of course. But first I wanted to confirm that the planet was being reconstructed correctly and actually matched the map.
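
A minimal sketch of that reconstruction step, assuming the surface map is a plate carrée image and the globe mesh uses unit-sphere vertices; all the names here are illustrative, not the real code:

```cpp
// Sketch: color each vertex of a low-res sphere mesh by sampling the
// stored surface map at that vertex's latitude/longitude.
#include <cmath>
#include <cstdint>
#include <vector>

struct Vec3 { float x, y, z; };
struct Color { uint8_t r, g, b; };

// Hypothetical surface map: a plate carrée image, width x height pixels.
struct SurfaceMap {
    int width, height;
    std::vector<Color> pixels;
    Color sample(float lonDeg, float latDeg) const {
        int px = int((lonDeg + 180.0f) / 360.0f * (width - 1));
        int py = int((90.0f - latDeg) / 180.0f * (height - 1));
        return pixels[py * width + px];
    }
};

// Assign a vertex color to every vertex of the globe mesh.
void colorGlobe(const std::vector<Vec3>& vertices, const SurfaceMap& map,
                std::vector<Color>& outColors) {
    outColors.resize(vertices.size());
    for (size_t i = 0; i < vertices.size(); ++i) {
        const Vec3& v = vertices[i];  // assumed to lie on the unit sphere
        float lat = std::asin(v.y) * 180.0f / 3.14159265f;
        float lon = std::atan2(v.z, v.x) * 180.0f / 3.14159265f;
        outColors[i] = map.sample(lon, lat);
    }
}
```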

Sunday, September 7, 2014

Spent a bit of time this afternoon cleaning up the planetary climate generation, as it was an overgrown and hacky mess. 

To recap, the system works by generating a climate texture, as seen on the left side of the screen. The top of the texture represents the poles (latitude 90 degrees) and the bottom the equator (latitude 0 degrees). The lowest altitude is on the left side, and the highest altitude is on the right. When the shader is drawing the planet, it computes the altitude of the pixel it’s currently working on, then uses that and the latitude to look up a climate pixel in the texture.
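
As a sketch of what that lookup amounts to, written as plain C++ rather than shader code; the texture layout follows the description above, while the names and nearest-texel sampling are my assumptions:

```cpp
// Sketch of the climate lookup the planet shader performs.
#include <cmath>
#include <vector>

struct Color { float r, g, b; };

struct ClimateTexture {
    int width, height;          // width = altitude axis, height = latitude axis
    std::vector<Color> texels;
    Color lookup(float altitude01, float latDeg) const {
        // Left edge = lowest altitude, right edge = highest.
        int x = int(altitude01 * (width - 1) + 0.5f);
        // Row index runs from the equator (0 degrees) to the poles (90 degrees).
        int y = int(std::fabs(latDeg) / 90.0f * (height - 1) + 0.5f);
        return texels[y * width + x];
    }
};
```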

This is nice and flexible, and can create a huge variety of results just by tweaking the climate texture, which is what I was focusing on improving. Besides the plus of not being an unreadable jumble, climate generation now also has more explicit parameters than the old setup. The system formerly relied on emergent effects and it doesn’t do that quite as much now, but this also means I can adjust those parameters and be more confident of getting attractive results.

Thursday, September 4, 2014

Cleaning up long-ago code that generated a 2D map of the 3D planet. I was doing it by flattening the actual sphere triangles onto the plane, but now that the planet is entirely generated by shader code there’s really no need for that — I can just draw a single rectangle which interprets its X and Y coordinates as latitude and longitude, then in the pixel shader converts those to the corresponding 3D point and runs the shader generation function to find what color it should be. The result is a perfect map. (Though not an interesting projection, map nerds; it’s still just a plate carrée.) The two oceans visible on the planet can be seen in the center of the map.
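
A sketch of that map pass, written as a CPU loop for clarity; `planetSurfaceColor` stands in for the shader’s generation function, and everything else is illustrative:

```cpp
// Sketch: render a plate carrée map by treating each pixel's X/Y as
// longitude/latitude, converting to a point on the unit sphere, and
// running the same generation function the planet shader uses.
#include <cmath>

struct Vec3 { float x, y, z; };
struct Color { float r, g, b; };

Color planetSurfaceColor(const Vec3& p); // stand-in for the generation function

void renderMap(Color* out, int width, int height) {
    const float PI = 3.14159265f;
    for (int y = 0; y < height; ++y) {
        float lat = ((y + 0.5f) / height - 0.5f) * PI;           // -90..90 deg
        for (int x = 0; x < width; ++x) {
            float lon = ((x + 0.5f) / width - 0.5f) * 2.0f * PI; // -180..180 deg
            Vec3 p { std::cos(lat) * std::cos(lon),
                     std::sin(lat),
                     std::cos(lat) * std::sin(lon) };
            out[y * width + x] = planetSurfaceColor(p);
        }
    }
}
```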

Now that’s all fine, but the genuinely interesting bit is we now have a data resource that tells us what sort of terrain a specific point on the planet’s surface might be — information that used to be locked away in the final rendered scene and inaccessible. That could be used for all kinds of irresponsible things.

Also it turns out that the reason my planets didn’t have ice caps was that there wasn’t enough temperature variation between the poles and the equator. I kind of feel that there’s more variation I could wedge into the climate texture, to get distinct desert zones in hotter regions. That’s worth investigating, as the homogeneous terrain of these planets can get a little dull.

Sunday, June 8, 2014

Just a bunch of waypoint-related gruntwork. WPs can detect if they are occluded (only by the nearest body) to lessen the chances of careless pilots crashing into planets trying to reach waypoints or moons on the other side. They also now have visibility ranges to keep them from overlapping their parent body’s waypoint at large distances.
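
The occlusion check against the nearest body reduces to a segment-versus-sphere test. A minimal sketch, with all names illustrative:

```cpp
// Sketch: a waypoint counts as hidden if the segment from the camera to it
// passes through the nearest body's bounding sphere.
#include <cmath>

struct Vec3 {
    float x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    float dot(const Vec3& o) const { return x * o.x + y * o.y + z * o.z; }
};

bool waypointOccluded(const Vec3& camera, const Vec3& waypoint,
                      const Vec3& bodyCenter, float bodyRadius) {
    Vec3 d = waypoint - camera;  // camera-to-waypoint segment
    Vec3 m = bodyCenter - camera;
    // Closest point on the segment to the body's center, clamped to [0,1].
    float t = std::fmax(0.0f, std::fmin(1.0f, m.dot(d) / d.dot(d)));
    Vec3 closest { camera.x + d.x * t, camera.y + d.y * t, camera.z + d.z * t };
    Vec3 toCenter = bodyCenter - closest;
    return toCenter.dot(toCenter) < bodyRadius * bodyRadius;
}
```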

The names of the sun, planets, and moon in this test environment (“Alsace,” “Seraphim,” “Clementine,” “Belfunk”) are taken from a 3D landscape game I was working on years and years ago, way back before I got into professional game development. Seeing them again creates feelings of strange nostalgia in me.

Saturday, June 7, 2014

I’ve spent the last few days reworking and extending the implementation of waypoints and their displayed icons in the ol’ space game. This has involved adding multiple waypoint types for locations and objects, starting to plumb through the multiple-world tech, and rewriting the icons to use flat 2D rendering instead of the projective cockpit stuff from the earlier iteration, as seen in the screenshot. This last part ended up involving some slightly annoying math.

See, I wanted the waypoint markers for planets to be a little line touching the top of the planet, as in the screenshot. I recently fixed up some longstanding hassles with projecting points between the 2D and 3D worlds in my engine, so it’s now really easy to take any point in the universe and project it onto the screen. You’d think all that was necessary was to start with the center of the planet, adjust it up along the camera’s vertical axis by the radius of the planet, project that to the 2D world and we’re good to go… right?

In the words of Wreck-it Ralph, HA. And also, no.

The problem is with the very nature of perspective projection. A planet is a 3D object with depth. The front of it bulges towards the camera, with the result that the point we just located is going to be concealed behind that bulge. As we get closer to the planet, the front of it bulges ever closer, and the point we projected gets ever more wrong. (The size of the bulge effect is completely unaffected by the camera’s field of view, by the way. Which makes sense: changing the FOV doesn’t give us any ability to see “around” the bulge any more than adjusting the zoom lens on a camera lets you see the back side of an object.)

How to address this? I ended up deriving a solution myself (after a few false starts) and feeling very smug about it, although you can also find a good description of the problem and a similar solution in this paper. Essentially: We want to locate the point in 3D space which is the highest possible point of the sphere on the vertical axis we can see. Imagine we draw a line from that point to the camera. The angle between that line, and a line connecting the point to the center of the planet, is going to be a right angle. Knowing this, with simple trigonometry we can derive the length of that line and the angle between the line and one connecting the camera and the planet’s center. Then, we take the camera-to-planet vector, rotate it around the horizontal axis by that angle and change its length to the value we derived, and we’ve found our point!
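
In code, the aligned-case math looks something like this; a sketch assuming the camera is outside the sphere:

```cpp
// Sketch of the derivation above, for the case where the planet is dead
// ahead: camera at distance d from a sphere of radius r. The visible "top"
// point is where the camera-to-point line is tangent to the sphere, i.e.
// perpendicular to the point-to-center line.
#include <cmath>

struct TangentResult { float length; float angle; };

TangentResult visibleTopPoint(float d, float r) {
    // Right triangle: hypotenuse d (camera to center), one leg r (center to
    // tangent point), so the tangent line has length sqrt(d^2 - r^2)...
    float length = std::sqrt(d * d - r * r);
    // ...and the angle between the camera-to-center line and the tangent
    // line satisfies sin(angle) = r / d.
    float angle = std::asin(r / d);
    return { length, angle };
}
// To recover the 3D point: rotate the camera-to-planet vector up around the
// camera's horizontal axis by `angle`, then scale it to `length`.
```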

There is a little complication, though, in that this would only work if the planet was perfectly aligned in front of the camera. As the planet moves to the left or right, the lines from the camera to the center and edge will stop being only on the vertical axis. I resolved this by finding the point where the camera would be if you slid it horizontally to line up with the planet and rotating around that, though I’m fairly sure there’s a simpler way to address that issue. Probably involves changing coordinate spaces, because what graphics technique doesn’t involve changing coordinate spaces, am I right? Huh? Guys, back me up on this one!

On an unrelated note, I like how all the fast travel screenshots in this game look pretty badass.

Sunday, June 1, 2014

Getting some code infrastructure rearranged so that the player can actually travel between points of interest in the star system and see different things at each one; not quite there yet, but it’s getting close. Here’s the thought process I followed:

  1. Okay, so I’ve decided to use my existing “world” system (which simply collects a bunch of entities into a single group that can be ticked and rendered all at once) to hold all the entities at a POI. Then, I can delete them all at once when the player leaves.
  2. It would be nice if you could leave a POI, come back, and see the same configuration of objects. That would require somehow being able to serialize entity state into a POI when you leave and then deserialize it into new entities upon return.
  3. Of course, when entities are serialized they wouldn’t tick, so (say) a battle that was going on would freeze until you returned. Maybe some ridiculous, roundabout way of letting entities simulate what would happen if they hadn’t been serialized…
  4. Wait. Why am I deleting them at all?

It’s 2014, and there is more computing power in the slightly aging iMac sitting in front of me than existed in the entire God damn world in 1985. (Which raises uncomfortable questions about the things I unthinkingly waste that power on, but anyway.) Why shouldn’t all these worlds just run simultaneously, while I render only the one the player is in? Now if I were planning to simulate wars between thousands of spaceships that would still cause trouble, but it’s unlikely there will be a whole lot going on in any other world besides a single combat encounter — you’re summoned across the system to defend a station that’s under attack from no more than a dozen enemies, say.

So, yeah. Each POI in your current system will have a world associated with it, they all tick forward normally, and as the player arrives at one we swap it into the “render this world” slot. Easy-peasy. Probably.
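
A minimal sketch of that arrangement, with `World` standing in for the existing entity-group class and everything else made up:

```cpp
// Sketch: every POI owns a live world, all worlds tick every frame, and
// only the one the player occupies gets rendered.
#include <memory>
#include <string>
#include <unordered_map>

class World {
public:
    void tick(float dt) { /* tick every entity in this group */ }
    void render() const { /* draw every entity in this group */ }
};

class PoiManager {
public:
    World& worldFor(const std::string& poiId) {
        auto& w = worlds_[poiId];
        if (!w) w = std::make_unique<World>();
        return *w;
    }
    void setActive(const std::string& poiId) { active_ = poiId; }

    void tickAll(float dt) {
        for (auto& kv : worlds_) kv.second->tick(dt);  // everything simulates
    }
    void renderActive() const {
        auto it = worlds_.find(active_);
        if (it != worlds_.end()) it->second->render(); // only one is drawn
    }
private:
    std::unordered_map<std::string, std::unique_ptr<World>> worlds_;
    std::string active_;
};
```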

Monday, May 26, 2014

That’s… that’s… not bad actually for an airless body. In fact, it’s kind of perfect. I don’t think I even need to add any hero craters.

This guy doesn’t even have as complex a shader as the Earthlike planet — it’s just two layers of noise, one large and one detail, added together. That’s appropriate, as the Earthlike shader tries to simulate large, solid bodies of one sort or another to create the impression of well-separated continents and oceans, while an airless body generally won’t have such constraints.
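
As a sketch of that two-layer formula, with `noise3` standing in for whatever 3D noise function the shader uses and the frequencies and weights invented:

```cpp
// Sketch: airless-body height = one large-scale noise layer plus one
// detail layer.
struct Vec3 { float x, y, z; };

float noise3(const Vec3& p); // assumed 3D noise function, roughly in [-1, 1]

float airlessHeight(const Vec3& p) {
    Vec3 coarse { p.x * 2.0f,  p.y * 2.0f,  p.z * 2.0f  };  // large features
    Vec3 fine   { p.x * 12.0f, p.y * 12.0f, p.z * 12.0f };  // fine detail
    return noise3(coarse) + 0.25f * noise3(fine);
}
```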

The only item worth pointing out is that this shader/color combination is great at generating objects that look like our Moon, but our Moon is unusual in having its large “seas.” Most other airless rocks are just a consistent wasteland of craters, so I might want to make more variations of this so as to have sea-less moons. As a side note, I thought that the seas were caused by the Earth’s gravity pulling on the side of the Moon facing the Earth and causing the volcanic eruptions which filled the seas, but the Wikipedia article suggests that’s unfounded. In fact, nobody knows why the seas mostly face Earth, or whether that might just be a completely meaningless coincidence.

Anyway, adding different planet types is a diversion. Next thing I should do is make local “points of interest” work. When you approach a POI under fast travel, you should automatically drop into normal space and whatever local objects are present to interact with in that POI should get spawned. Conversely, once you leave a POI under fast travel all the local objects should get despawned again. This would allow the player to actually visit different regions of the solar system and have gameplay there, which I hear is important for a game.

Sunday, May 25, 2014

Working on some Venusian (cloud-covered) planets. The look is pretty much just the same cloud effects as on terrestrials, but without the ground or shadows and with cloud variation fading away towards the poles. I’m not totally sold yet — given the abstract, solid-color style, I have to find clear ways of making a cloud-covered planet look different from a simple airless one.
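
A sketch of the polar fade, assuming a hypothetical `cloudPattern` function for the terrestrial cloud look and a made-up fade curve:

```cpp
// Sketch: blend the cloud pattern toward a flat base color as latitude
// rises, so variation fades away at the poles.
#include <cmath>

struct Color {
    float r, g, b;
    Color lerp(const Color& o, float t) const {
        return { r + (o.r - r) * t, g + (o.g - g) * t, b + (o.b - b) * t };
    }
};

Color cloudPattern(float lonDeg, float latDeg); // the terrestrial cloud look

Color venusianColor(float lonDeg, float latDeg, const Color& base) {
    float fade = std::fabs(latDeg) / 90.0f;          // 0 at equator, 1 at pole
    return cloudPattern(lonDeg, latDeg).lerp(base, fade * fade);
}
```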

Thursday, May 1, 2014

I haven’t posted here in a bit, so I thought I’d write a little on what I’m trying to do with these terrestrial planets.

First up, I wanted to get away from the sphere-halving method I’ve always used to generate continents and other surface features on fractal planets. This is an excellent technique that produces fine results, but you need to cut the sphere in half hundreds if not thousands of times to get the desired level of detail, and that slows down generation of solar systems. 3D noise isn’t good enough by itself either, though, because it doesn’t break up into plausibly contiguous areas of land and water like real planets do — or our real planet, at least, does. To address this I rigged up some test shaders to try and generate the land entirely in the pixel shader by stacking up multiple layers of 3D noise, including a layer that represents the difference between two noise sets. (If you’re curious, the high-level algorithm is to create the difference layer, then multiply it by another noise layer, and finally multiply in a scaled-down fourth layer to add the fine detail.) The results aren’t bad, and it seems to be a bit faster at any rate.
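
Here’s that high-level algorithm as a sketch, with `noise3` and all the frequencies and offsets as stand-ins; the real version runs in the pixel shader:

```cpp
// Sketch of the land formula as described: a difference of two noise sets,
// multiplied by a third noise layer, multiplied by a scaled-down fourth
// layer for fine detail.
struct Vec3 { float x, y, z; };

float noise3(const Vec3& p); // assumed 3D noise function

static Vec3 scale(const Vec3& p, float s)  { return { p.x * s, p.y * s, p.z * s }; }
static Vec3 offset(const Vec3& p, float o) { return { p.x + o, p.y + o, p.z + o }; }

float landHeight(const Vec3& p) {
    float diff   = noise3(p) - noise3(offset(p, 37.0f)); // difference layer
    float mask   = noise3(scale(p, 0.5f));               // broad multiplier
    float detail = noise3(scale(p, 8.0f));               // scaled-down detail
    return diff * mask * detail;
}
```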

Similarly, I wanted more realistic-looking clouds than the weird, nasty blobs I had before with pure 3D noise. A formula very similar to the land one produced better results, as seen in the first picture. It has the additional feature of increasing the horizontal scale of cloud features towards the middle latitudes, which adds a bit of a fakey Coriolis effect to the cloud systems and makes them a bit more plausible-looking. I went through a lot of iterations of this, but it just might be on the right track now. (As a side note, @DesignerBradley on Twitter gave me some super useful hints that set me on the right track to improve my clouds — thanks again!)
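
One way to get that latitude-dependent stretch, sketched with invented shaping constants (the real formula may well differ):

```cpp
// Sketch: compress the noise sample coordinates along the east-west
// direction near the middle latitudes, so cloud features come out
// horizontally elongated there.
#include <cmath>

struct Vec3 { float x, y, z; };

float noise3(const Vec3& p); // assumed 3D noise function

float cloudDensity(float lonRad, float latRad) {
    // Stretch factor peaks around 45 degrees latitude (made-up curve).
    float stretch = 1.0f + 2.0f * std::sin(2.0f * std::fabs(latRad));
    Vec3 p { std::cos(latRad) * std::cos(lonRad) / stretch,
             std::sin(latRad),
             std::cos(latRad) * std::sin(lonRad) / stretch };
    return noise3(p);
}
```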

As for airless planets, tweaking the noise parameters to make them rougher and add more climate variation by latitude produces more attractive results, as in the second image. However, my climate code does produce odd results: I don’t think I’ve ever seen a generated verdant planet with an ice cap, while the airless ones always have huge polar ice caps like the ones seen here. This is pretty much the opposite of real life, where the only solar system bodies with meaningful ice caps are Earth and Mars, which both have atmospheres. I suspect that on an airless planet or moon that isn’t tidally locked, you’d be limited to whatever could survive on the sunlit side of the body: if ice would melt and evaporate on the sunlit side, then the sun will ensure that ice is melted and evaporated from the whole body as it rotates, and if not, not. Thus it would either be all rock, or (given that planets usually form from homogeneous rings of material) all ice. Who knows if that’s true, as the Internet doesn’t care about anything icecap-related that isn’t an apocalyptic melting scenario which will righteously punish the human race for its technological sins, but it sounds plausible to me at any rate.

One side thing I’d like to do is have the densest-atmosphere planets turn to a solid color, perhaps with some minor cloud effects, as with Venus or Titan in our solar system. Pretty easy to implement that based on certain results from planet generation.

The third picture is unrelated to planet generation; it’s just an example of finally getting the lens flares working again. Frankly, I have no idea whatsoever whether the amount of hassle I ended up going through was appropriate for this task or not.

Saturday, March 29, 2014

Ground control to Major Tom. Come in, Major Tom.

I finally buckled down and reworked the camera and ship control code in the space game, and I think I have something promising at long last. There were really two major broken things: the ship control itself and the chase camera. I pulled them apart into their own classes and addressed their problems separately, which shockingly turned out to be a cleaner way to deal with them.

At a fundamental level, ship motion is Ace Combat-style. You maintain a constant velocity forward but can hold the left or right analog trigger to temporarily decelerate (all the way to a stop, in my game’s case) or accelerate using afterburners. I’ve always found that monitoring my ship’s speed is a giant hassle I can never keep up with in space sims, and the AC model means it’s simply not a problem. You know your speed at all times because you are personally holding the analog triggers that set it, just like you know whether your car is stopped or accelerating based on the pedals you’re pushing down with your feet. As far as velocity and inertia go, right now you simply lerp to your new velocity at a constant rate. This is utterly nonsensical from a physics point of view, but it sure was simple to program, and it’s not hard to understand what’s going on, an important factor when you’re in interplanetary space without a lot of landmarks around.
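
A sketch of that control scheme, with all the constants invented:

```cpp
// Sketch: triggers set a target speed between zero and afterburner, and
// current speed lerps toward the target at a constant rate.
struct Ship {
    float cruiseSpeed  = 100.0f; // units/sec with no trigger held
    float maxSpeed     = 250.0f; // full right trigger (afterburner)
    float velocityLerp = 2.0f;   // how quickly speed chases the target
    float currentSpeed = 0.0f;

    // leftTrigger, rightTrigger in [0, 1].
    void tick(float leftTrigger, float rightTrigger, float dt) {
        float target = cruiseSpeed;
        if (leftTrigger > 0.0f)       // decelerate, all the way to a stop
            target = cruiseSpeed * (1.0f - leftTrigger);
        else if (rightTrigger > 0.0f) // afterburner
            target = cruiseSpeed + (maxSpeed - cruiseSpeed) * rightTrigger;
        // Constant-rate lerp: not physical, but simple and readable.
        float t = velocityLerp * dt;
        if (t > 1.0f) t = 1.0f;
        currentSpeed += (target - currentSpeed) * t;
    }
};
```

Swapping out the constants per ship is exactly what makes the different “flight models” mentioned below cheap to support.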

The other nice thing about this is that all the ship behavior is based on a set of constants. Reducing the steering lerp rate, say, makes the difference between a nimble fighter and a heavy barge. This’ll make it easy to support different “flight models” for different ships, or even the same ship equipping different types of equipment.

As for the chase camera, it’s a simple affair that monitors its target and lerps (sensing a pattern here?) to the desired chase position and angle. This works nicely in that by adjusting the lerp rate I can tweak to what extent you see the ship pivoting and reeling about on the screen, and the more violently you maneuver the more the camera lags to give you a sense that you’re making tough maneuvers. I might make the lerp rate dependent on your speed, so the ship feels a little out of control at top speed. It would be easy to explore, at any rate.
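
And a sketch of the camera side, with a minimal stand-in math type; a real version would ease the orientation the same way (with a slerp):

```cpp
// Sketch: each frame the chase camera lerps from where it is toward the
// ideal position behind the ship, so hard maneuvers make it visibly lag.
struct Vec3 {
    float x, y, z;
    Vec3 lerp(const Vec3& o, float t) const {
        return { x + (o.x - x) * t, y + (o.y - y) * t, z + (o.z - z) * t };
    }
};

struct ChaseCamera {
    Vec3  position { 0, 0, 0 };
    float lerpRate = 4.0f;  // lower = heavier camera lag

    void tick(const Vec3& desiredPos, float dt) {
        float t = lerpRate * dt;
        if (t > 1.0f) t = 1.0f;
        position = position.lerp(desiredPos, t);
        // Orientation would chase its desired value the same way.
    }
};
```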

The lack of a decent, non-hacked-in camera and ship control was a huge blocker for the game, so maybe I’ll feel better about exploring different areas of gameplay now.