## A Note on Co-ordinate Systems

Before delving deeper into terrain construction I thought a brief note on co-ordinate systems would be worthwhile. Celestial bodies come in all shapes and sizes, but as I live on planet Earth like most people, basing my virtual planet upon this well known baseline makes a lot of sense.

Now the Earth isn’t quite a perfect sphere, so its radius varies, but its volumetric mean radius is about 6371 km, which is a good enough figure to go on. Most computer graphics use single precision floating point numbers for their calculations, as they are generally a bit faster for the computer to work with and are universally supported by recent graphics cards. With only 23 bits for the significand, however, they offer only about seven significant decimal digits, which can be a real problem when representing data on a planetary scale.

Simply using a single precision floating point vector to represent the position of something on the surface of our Earth sized planet, for example, would give an effective resolution of just a couple of metres – possibly good enough for a building, but hardly useful for anything smaller and definitely insufficient for moving around at speeds lower than hundreds of kilometres per hour. Naively use floats for the whole pipeline and we quickly see our world jiggling and snapping around in a truly horrible manner as we navigate.
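To make that concrete, here is a quick Python sketch (`struct` is just being used to round-trip values through IEEE 754 single precision): the representable step at this magnitude is half a metre, and a few such steps of accumulated rounding error soon adds up to that couple of metres of effective resolution.

```python
import struct

def to_float32(x):
    # Round-trip a Python double through IEEE 754 single precision,
    # mimicking storage in a 32-bit GPU float.
    return struct.unpack('f', struct.pack('f', x))[0]

earth_radius = 6371000.0  # metres

# A point 10 cm above the reference radius collapses back onto it:
print(to_float32(earth_radius + 0.1))                 # 6371000.0 - the 10 cm is gone
# One unit in the last place at this magnitude is half a metre:
print(to_float32(earth_radius + 0.5) - earth_radius)  # 0.5
```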

Moving to double precision floating point numbers is an obvious and easy solution to this problem: with their 52 bit significand they can easily represent positions down to millionths of a millimetre at planetary scale, which is more than enough for our needs. On modern CPUs their use is no longer prohibitively expensive in performance terms either, with some rudimentary timings showing only a 10%-15% drop in speed when switching core routines from single to double precision, and the large amounts of RAM available now make the increased memory requirement of doubles easily justified. The problem of modern GPUs using single precision remains, however: somehow we have to pipe our double precision data from the CPU through the single precision pipe to the GPU for rendering.
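For comparison, Python’s `math.ulp` (available from 3.9) reports the spacing of adjacent representable doubles around a value, which shows just how much headroom double precision leaves at planetary scale:

```python
import math

earth_radius = 6371000.0  # metres

# Spacing between adjacent representable doubles near the Earth's radius:
spacing = math.ulp(earth_radius)
print(spacing)  # 2**-30, roughly 9.3e-10 m - comfortably below a nanometre
```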

My solution for this is simply to have the single precision part of the process, namely the rendering, take place in a co-ordinate space centred upon the viewpoint rather than the centre of the planet. This ensures that the available resolution is used as effectively as possible: as precision falls off on distant objects, those objects are by nature very small on screen, where the numerical resolution issues won’t be visible.

To make this relocation of origin possible, I store with each tile its centre point in world space as a double precision vector, then store the vertex positions for the tile’s geometry components as single precision floats relative to this centre. Before each tile is rendered, the vector from the viewpoint to the centre of the tile is calculated in double precision and used to generate the single precision combined tile->world->view->projection space matrix used for rendering.
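A minimal sketch of that ordering, with Python standing in for the engine code (the names and numbers are illustrative, not taken from the actual engine): the key point is that the subtraction happens in double precision before anything is narrowed to single precision.

```python
import struct

def f32(x):
    # Round-trip through IEEE 754 single precision, like storing in a GPU float.
    return struct.unpack('f', struct.pack('f', x))[0]

# Tile centre and viewpoint in planet space (metres), both double precision:
tile_centre = (6371000.1, 12.0, -3.5)
eye         = (6371000.0, 10.0, -3.5)

# Subtract in doubles FIRST, then narrow the small result to single precision.
# Because the two points are close together, the sub-metre detail survives:
offset = tuple(f32(c - e) for c, e in zip(tile_centre, eye))
print(offset)  # x component ~0.1 - the 10 cm offset is intact

# Narrowing the absolute positions first would destroy that detail:
bad = tuple(f32(c) - f32(e) for c, e in zip(tile_centre, eye))
print(bad)     # x component 0.0 - the offset has vanished
```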

In this way the single precision vertices are only ever transformed to be in the correct location relative to the viewpoint (essentially the origin for rendering) to ensure maximum precision. The end result is that I can fly from millions of miles out in space down to being inches from the surface of the terrain without any numerical precision problems.

There are of course other ways to achieve this, such as using nested co-ordinate spaces, but selective use of doubles on the CPU is simple and relatively efficient in both performance and memory cost, so it feels like the best approach.

Of course numerical precision issues are not limited to the positions of vertices; another example is easily found in the texture co-ordinates used to look up into the terrain textures. On the one hand I want a nice continuous texture space across the planet to avoid seams in the texture tiling, but as the texture co-ordinates are only single precision floats there simply isn’t the precision for this. Instead, different texture scales have to be used at different terrain LOD levels and the results somehow blended together to avoid seams – I’m not going to go too deeply into my solution for that here as I think it’s probably worth a post of its own in due course.
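To illustrate the texture co-ordinate half of the problem (again an illustrative Python sketch, not engine code; assume a texture that repeats every metre): a “global” co-ordinate at planetary magnitude has no single precision left for the fractional part, which is the only part the sampler actually cares about, whereas a per-tile rescaled co-ordinate keeps it.

```python
import struct

def f32(x):
    # Round-trip through IEEE 754 single precision.
    return struct.unpack('f', struct.pack('f', x))[0]

# A "global" texture co-ordinate, in metres from a planetary origin:
u_global = 6371000.37

# In single precision the fractional part - the bit that actually addresses
# the texture - is rounded away to the nearest half metre:
print(f32(u_global) % 1.0)  # 0.5, not 0.37

# Rescaling relative to a tile keeps the value small and the fraction intact:
tile_origin, tile_size = 6371000.0, 16.0
u_local = f32((u_global - tile_origin) / tile_size)  # small value, full precision
print(u_local * tile_size)  # ~0.37 again
```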

Hi, first of all great post! I’m working on my own procedural planet engine and I’ve come across the double precision problem. There’s one part of your post that I don’t quite understand, which is this:

“store the vertex positions for the tile’s geometry components as single precision floats relative to this centre.”

So let’s say the centre of a node at depth ~15 or so is around 0.00013666818267665803. The width between each vertex is 0.00006103515625, so I would lose precision very quickly. All the zeros in the front are wasted space. How can I utilize them? Does that mean each patch has the same size but is just translated and scaled to fit on the planet?

Thanks!

Thanks for the comment, and good luck with your planet engine – let everyone know if you blog about it! The thing with floating point numbers is that it’s the number of significant digits that matters: in your example above, 0.00006103515625 isn’t wasting space with the zeros, as all the important digits come after them.

If on the other hand it had been something like 123456.00006103515625 you would be in trouble, as there are important digits at both ends of the number, so the leading digits use up the precision. Take a look at IEEE single precision floating point on Wikipedia or elsewhere on the net for more info.
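A single precision round-trip makes this concrete (a Python sketch; I’ve used a six-digit integer part so the loss is clearly visible):

```python
import struct

def f32(x):
    # Round-trip a value through IEEE 754 single precision.
    return struct.unpack('f', struct.pack('f', x))[0]

# Leading zeros cost nothing: this value is exactly 2**-14 and survives intact.
print(f32(0.00006103515625) == 0.00006103515625)  # True

# Significant digits at BOTH ends are the problem: the large integer part
# consumes the significand and the tiny fraction is rounded away entirely.
print(f32(123456.00006103515625))  # 123456.0
```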

BTW, I’m not posting to this blog any more but now post here: http://johnwhigham.blogspot.co.uk/

Ahh, sweet, thanks! I’ve managed to fix my precision problem. However, I’ve come across a new precision problem with normal maps that I’ve described here:

http://www.gamedev.net/topic/660629-procedural-planet-gpu-normal-map-artifacts/

At depth ~15 I also get weird noise/artifacts on my normal map, or actually when I sample it. Did you manage to find some good solution to that? 🙂 (Posting here in hope for a quick reply back!) Thanks so much!

Could the problem be running out of precision on the GPU interpolators for texture co-ordinates? I’ve certainly had problems before when (u,v) co-ordinates converge too much and you are sampling tiny fractions of the texture. There is a very real limit to what the GPU can physically accomplish and we have to work within those restrictions.

One strategy that can be useful is to re-scale the texture co-ordinates to keep the texels approximately constant size on screen, but then you have to blend between octaves to hide the joins… of course that may not apply to your particular scenario.
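In case it helps, here’s a rough sketch of that blending idea (illustrative Python, definitely not the actual shader code): sample at two adjacent texel scales and cross-fade on the fractional LOD so the transition between octaves is invisible.

```python
import math

def sample_texture(u):
    # Stand-in for a tiling texture fetch: any periodic function will do here.
    return 0.5 + 0.5 * math.sin(u * 2.0 * math.pi)

def sample_blended(u_world, lod):
    # Two adjacent octaves, with the texel scale doubling at each LOD level:
    lod_floor = math.floor(lod)
    scale_a = 2.0 ** lod_floor
    scale_b = scale_a * 2.0
    a = sample_texture(u_world / scale_a)
    b = sample_texture(u_world / scale_b)
    t = lod - lod_floor               # 0..1 fraction between the two levels
    return a * (1.0 - t) + b * t      # linear cross-fade hides the join
```

At an integer LOD the blend collapses to a single octave, so stepping the scale never produces a visible seam.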