Archive for the ‘Osiris’ Category

Generating (a lot of) Data

February 15, 2011

In my previous post I introduced the Osiris project that I’ve started working on and outlined the basic construction system for the planet I’m trying to procedurally build.  With that set up, the next step was to create a system for generating and storing the multitude of data that is required to represent an entire planet.

Now planets are pretty large things and the diversity and quantity of data that is required to represent one at even fairly low fidelity gets very large very quickly.  A requirement of this system though is that I want to be able to run my demo and immediately fly down near to the surface to see what’s there – I don’t want to have to sit waiting for minutes or hours while it churns away in the background building everything up.

Atmospheric scattering shader and starfield skybox from orbit

For this to work I obviously need some form of asynchronous data generation system that can run in the background spitting out bits and pieces of data as quickly as possible while the main foreground thread is dealing with the user interface, camera movement and most importantly rendering the view.

This fits quite well with modern CPUs, where the number of logical cores and hardware threads continues to rise, providing increasing scope for such background operations – but it also means that the data generation system needs to be able to run on an arbitrary number of threads rather than just a single background one.  An added bonus of such scalability is that time can even be stolen from the primary rendering thread when not much else is going on – for example when the view is stationary or when the application is minimised.
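To make the shape of this more concrete, here is a minimal sketch of the kind of worker pool I have in mind: a handful of threads pulling generation jobs off a shared queue while the foreground thread keeps rendering. The WorkerPool name and the use of std::function for jobs are purely illustrative, not the actual Osiris classes.

```cpp
// Minimal sketch of a scalable background worker pool (illustrative only).
// Worker threads pull generation jobs from a shared queue; the foreground
// thread just pushes requests and carries on with rendering.
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class WorkerPool {
public:
    explicit WorkerPool(unsigned threadCount) {
        for (unsigned i = 0; i < threadCount; ++i)
            workers.emplace_back([this] { Run(); });
    }
    ~WorkerPool() {
        {
            std::lock_guard<std::mutex> lock(mutex);
            stopping = true;
        }
        wake.notify_all();
        for (auto& t : workers) t.join();
    }
    void Push(std::function<void()> job) {
        {
            std::lock_guard<std::mutex> lock(mutex);
            jobs.push(std::move(job));
        }
        wake.notify_one();
    }
private:
    void Run() {
        for (;;) {
            std::function<void()> job;
            {
                std::unique_lock<std::mutex> lock(mutex);
                wake.wait(lock, [this] { return stopping || !jobs.empty(); });
                if (stopping && jobs.empty()) return;
                job = std::move(jobs.front());
                jobs.pop();
            }
            job(); // generate a chunk of planet data
        }
    }
    std::vector<std::thread> workers;
    std::queue<std::function<void()>> jobs;
    std::mutex mutex;
    std::condition_variable wake;
    bool stopping = false;
};
```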

Another view of the atmospheric scattering shader and starfield skybox

Ultimately this work should be able to be farmed off to secondary PCs in some form of distributed computing system, or even out into the cloud – but to support that, data generation has to be completely decoupled from rendering and able to operate in isolation.  Even if such distribution never happens, designing in that separation and isolation is still a valuable architectural goal.

So I need to be able to generate data in the background, but to achieve my interactive experience I also need it to be generating the correct data in the background, which in this case means that at any given moment I want it to be generating data for the most significant features closest to the viewpoint.  This determination of what to generate also needs to be highly dynamic as the viewpoint can move around very quickly – thousands or even tens of thousands of miles per hour at times – so it's no good queuing up thousands of jobs; the current set of what's required needs to be generated and maintained on the fly.
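One simple way to keep the work set fresh is to re-score a list of candidate tiles against the current viewpoint every update and only ever hand the best few to the workers. The sketch below is just an illustration of that idea; the Candidate structure, the Score function and the weighting factor are invented for the example rather than taken from the project.

```cpp
// Illustrative prioritisation: re-evaluate candidates against the current
// viewpoint each update, so the "queue" can never go stale as the camera moves.
#include <algorithm>
#include <cstddef>
#include <vector>

struct Candidate {
    double distanceToViewpoint; // metres from camera to tile centre
    int    lodLevel;            // higher = finer detail
};

// Lower score = more urgent: near tiles and coarser detail come first.
double Score(const Candidate& c) {
    return c.distanceToViewpoint * (1.0 + 0.25 * c.lodLevel);
}

void PickJobs(std::vector<Candidate>& candidates, std::size_t maxInFlight) {
    std::size_t count = std::min(maxInFlight, candidates.size());
    std::partial_sort(candidates.begin(),
                      candidates.begin() + static_cast<std::ptrdiff_t>(count),
                      candidates.end(),
                      [](const Candidate& a, const Candidate& b) {
                          return Score(a) < Score(b);
                      });
    // The first 'count' entries get submitted to the worker pool;
    // everything else is simply re-evaluated next update.
}
```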

Specular reflection on the water and lens flare visible

Finally, as generating data can be a non-trivial process, the system needs to be able to cache data it has already generated on disk for rapid reloading on subsequent runs, or even later in the same run if the in-memory data had to be flushed to keep the total memory footprint down.  I can't simply cache everything however: for an entire planet, the amount of data at the level of fidelity I want to reproduce can easily run into terabytes, so it's important to only cache up to a realistic point – say a few gigabytes worth – with the rest always being generated on demand.

To maximise the effectiveness of disk caching I also want to include compression in the caching system – the computational overhead of a standard compression library such as zlib shouldn't be exorbitant compared to the potential gigabytes of saved disk space.
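As a rough sketch of that idea (assumed usage of plain zlib, not the real Osiris cache code), a generated block can be compressed before being written out, with the original size stored alongside it so a reload knows how large a buffer to decompress into:

```cpp
// Sketch: compress a generated data block with zlib before writing it to the
// disk cache. Reloading would read the stored original size and call uncompress.
#include <cstdio>
#include <vector>
#include <zlib.h>

bool WriteCompressed(const char* path, const std::vector<unsigned char>& data) {
    uLongf compressedSize = compressBound(static_cast<uLong>(data.size()));
    std::vector<unsigned char> compressed(compressedSize);
    if (compress2(compressed.data(), &compressedSize,
                  data.data(), static_cast<uLong>(data.size()),
                  Z_BEST_SPEED) != Z_OK)
        return false;

    FILE* file = std::fopen(path, "wb");
    if (!file) return false;
    uLong originalSize = static_cast<uLong>(data.size());
    std::fwrite(&originalSize, sizeof(originalSize), 1, file); // needed to size the buffer on reload
    std::fwrite(compressed.data(), 1, compressedSize, file);
    std::fclose(file);
    return true;
}
```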

This is quite a shopping list of requirements of course, which brings home the unavoidable complexity of generating high fidelity data on a planetary scale.  Even non-optimal solutions to these primary requirements, though, should allow me to build on top of such a generic data generation system and start to look at the planetary infrastructure generation and simulation work that I am primarily interested in.

Another view of the atmospheric scattering shader and starfield skybox

Unfortunately, data generation architecture lends itself only so well to pretty pictures, so rather than some dull boxes-and-lines representation of data flow, the images with this post show the atmospheric scattering shader that I've also recently added.  It's probably the single biggest improvement in both visual impact and fidelity and suddenly makes the terrain look like a planet rather than just a textured ball – more on this atmospheric shader in a later post.

A Note on Co-ordinate Systems

November 9, 2010

Before delving deeper into terrain construction I thought a brief note on co-ordinate systems would be worthwhile. Stellar bodies come in all shapes and sizes, but as I live on planet Earth like most people, basing my virtual planet on this well known baseline makes a lot of sense.

Now the Earth isn’t quite a perfect sphere so it’s radius varies but it’s volumetric mean radius is about 6371 Km so that’s a good enough figure to go on. Most computer graphics use single precision floating point numbers for their calculations as they are generally a bit faster for the computer to work with and are universally supported by recent graphics cards, but with only 23 bits for the significand they have only seven significant digits which can be a real problem when representing data on a planetary scale.

Simply using a single precision floating point vector to represent the position of something on the surface of our Earth sized planet, for example, would give an effective resolution of just a couple of meters – possibly good enough for a building, but hardly useful for anything smaller and definitely insufficient for moving around at speeds lower than hundreds of kilometers per hour. Try to naively use floats for the whole pipeline and we quickly see our world jiggling and snapping around in a truly horrible manner as we navigate.
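A tiny test makes the problem concrete (assuming an Earth-sized radius of 6,371 km): the gap between adjacent single precision values at that distance from the origin is already about half a metre before any arithmetic is done, and rounding errors only grow as positions are transformed through the pipeline.

```cpp
// Print the smallest representable single precision step at planetary radius.
#include <cmath>
#include <cstdio>

int main() {
    float radius = 6371000.0f;                                  // metres from the planet centre
    float ulp = std::nextafter(radius, 2.0f * radius) - radius; // next representable float, minus this one
    std::printf("Smallest representable step at the surface: %f m\n", ulp); // ~0.5 m
    return 0;
}
```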

Moving to double precision floating point numbers is an obvious and easy solution to this problem: with their 52 bit significand they can easily represent positions down to millionths of a millimetre at planetary scale, which is more than enough for our needs. With modern CPUs their use is no longer as prohibitively expensive as it used to be in performance terms either, with some rudimentary timings showing only a 10%-15% drop in speed when switching core routines from single to double precision. The large amounts of RAM available now also make the increased memory requirement of doubles easily justified. The problem of modern GPUs using single precision remains, however: somehow we have to pipe our double resolution data from the CPU through the single precision pipe to the GPU for rendering.

My solution for this is simply to have the single precision part of the process, namely the rendering, take place in a co-ordinate space centred upon the viewpoint rather than the centre of the planet. This ensures that the available resolution is used as effectively as possible: precision only falls off on distant objects, which are by nature very small on screen, where the numerical resolution issues won't be visible.

To make this relocation of origin possible, I store with each tile its centre point in world space as a double precision vector, then store the vertex positions for the tile's geometry components as single precision floats relative to this centre. Before each tile is rendered, the vector from the viewpoint to the centre of the tile is calculated in double precision and used to generate the complete single precision tile->world->view->projection space matrix used for rendering.

In this way the single precision vertices are only ever transformed to be in the correct location relative to the viewpoint (essentially the origin for rendering) to ensure maximum precision. The end result is that I can fly from millions of miles out in space down to being inches from the surface of the terrain without any numerical precision problems.
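As a rough illustration of that per-tile step (the vector types here are simple stand-ins, not the project's actual maths library), the key operation is computing the small camera-relative offset in double precision and only then dropping to floats:

```cpp
// Viewpoint-relative rendering sketch: only the small difference between the
// tile centre and the camera position is handed to the GPU as floats.
struct DVec3 { double x, y, z; };
struct FVec3 { float x, y, z; };

FVec3 TileOffsetFromCamera(const DVec3& tileCentre, const DVec3& cameraPos) {
    return FVec3{ static_cast<float>(tileCentre.x - cameraPos.x),
                  static_cast<float>(tileCentre.y - cameraPos.y),
                  static_cast<float>(tileCentre.z - cameraPos.z) };
}
// This offset becomes the translation part of the single precision
// tile->world->view->projection matrix; vertices stored relative to the tile
// centre then only ever see values close to the viewpoint, where float
// precision is at its best.
```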

The planet from space...

...from orbit...

...and from just a few feet off the surface

There are of course other ways to achieve this, using nested co-ordinate spaces for example, but selective use of doubles on the CPU is both simple and relatively efficient in performance and memory costs, so it feels like the best way.

Of course numerical precision issues are not limited to the positions of vertices; another example is easily found with the texture co-ordinates used to look up into the terrain textures. On the one hand I want a nice continuous texture space across the planet to avoid seams in the texture tiling, but as the texture co-ordinates are only single precision floats there simply isn't the precision for this. Instead different texture scales have to be used at different terrain LOD levels and the results somehow blended together to avoid seams – I'm not going to go too deeply into my solution for that here as I think it's probably worth a post of its own in due course.

Osiris, the Introduction

November 5, 2010

Continuing the theme of procedurally generated planets, I've started a new project I've called Osiris (after the Egyptian god usually identified as the god of the afterlife, the underworld and the dead), a new experiment in seeing how far I can get having code create a living, breathing world.

Although my previous projects Isis and Geo were both also in this vein, I felt that they each had such significant limitations in their underlying technology that it was better to start again with something fresh.  The biggest difference between Geo and Osiris is that where the former used a completely arbitrary voxel based mesh structure for its terrain, Osiris uses a more conventional, essentially 2D, tile based structure.  I decided to do this as I was never able to achieve a satisfactory transition effect between the levels of detail in the voxel mesh, leaving ugly artifacts and, worse, visible cracks between mesh sections – both of which made the terrain look essentially broken.

After spending so much time on the underlying terrain mesh systems in Isis and Geo I also wanted to implement something a little more straightforward so I could turn my attention more quickly to the procedural creation of planetary scale infrastructure – cities, roads, railways and the like, along with more interesting landscape features such as rivers or icebergs.  This really interests me and is immediately more appealing for experimentation as it's not something I have attempted before.  Although a 2D tile mesh grid system is pretty basic in the terrain representation league table, there is still a degree of complexity to representing an entire planet using any technique, so even that familiar ground should remain interesting.

The first version shown here is the basic planetary sphere rendered using mesh tiles at various LOD levels.  I've chosen to represent the planet essentially as a distorted cube, with each face represented by a single 32×32 tile at the lowest LOD level.  While the image below on the left may be suitable as a base for Borg-world, I think the one on the right is the basis I want to pursue…


While mapping a cube onto a sphere produces noticeable distortion as you approach the corners of each face, by generating terrain texturing and height co-ordinates from the sphere's surface rather than the cube's I hope to minimise how visible this distortion is, and it feels like having what is essentially a regular 2D grid to build upon will make many of the interesting challenges to come more manageable.  The generation and storage of data in particular becomes simpler when the surface of the planet can be broken up into square patches, each of which provides a natural container for the data required to simulate and render that area.
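For what it's worth, the cube-to-sphere projection itself can be as simple as normalising a point on the cube and scaling by the planet's radius plus the terrain height sampled at that point on the sphere. The sketch below is purely illustrative and the names are my own:

```cpp
// Project a point on the planetary cube onto the sphere, then push it out by
// the terrain height sampled on the sphere's surface.
#include <cmath>

struct Vec3 { double x, y, z; };

Vec3 CubeToSphere(const Vec3& pointOnCube, double radius, double height) {
    double len = std::sqrt(pointOnCube.x * pointOnCube.x +
                           pointOnCube.y * pointOnCube.y +
                           pointOnCube.z * pointOnCube.z);
    double scale = (radius + height) / len; // normalise onto the sphere, scale to final radius
    return Vec3{ pointOnCube.x * scale, pointOnCube.y * scale, pointOnCube.z * scale };
}
```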

At this lowest level of detail (LOD) each face of the planetary cube is represented by a single 32×32 polygon patch.  At this resolution each patch covers about 10,000 km of an Earth sized planet's equator (the equatorial circumference of roughly 40,000 km split across the four faces that wrap the equator), with each polygon within it covering about 313 km.  While that's acceptable when viewing the planet from a reasonable distance in space, as you get closer the polygon edges start to get pretty damn obvious, so of course the patches have to be subdivided into higher detail representations.

I've chosen to do this in pretty much the simplest way possible to keep the code simple and make a nice robust association between sections of the planet's surface and the data required to render them.  As the view nears a patch it gets recursively divided into four smaller patches, each of which is 32×32 polygons in its own right, effectively halving the size of each polygon in world space and quadrupling the polygonal density.
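A quadtree-style patch along these lines might look roughly like the sketch below. The Patch structure, the splitFactor threshold and the flat face layout are all simplifications of my own for illustration rather than the project's actual code.

```cpp
// Illustrative quadtree-style patch subdivision: when the viewpoint gets close
// enough relative to a patch's size it is split into four children of half the
// world-space extent, each still 32x32 polygons; it merges back as the view recedes.
#include <cmath>
#include <memory>

struct Patch {
    double centreX = 0, centreY = 0, centreZ = 0; // patch centre in world space (double precision)
    double extent = 0;                            // world-space size of one side
    std::unique_ptr<Patch> children[4];           // populated once the patch is split

    void Update(double viewX, double viewY, double viewZ, double splitFactor) {
        double dx = viewX - centreX, dy = viewY - centreY, dz = viewZ - centreZ;
        double distance = std::sqrt(dx * dx + dy * dy + dz * dz);
        bool shouldSplit = distance < extent * splitFactor;
        if (shouldSplit && !children[0])
            Split();
        else if (!shouldSplit && children[0])
            for (auto& c : children) c.reset();   // merge back to the parent tile
        if (children[0])
            for (auto& c : children)
                c->Update(viewX, viewY, viewZ, splitFactor);
    }

    void Split() {
        double half = extent * 0.5, quarter = extent * 0.25;
        // Child centres offset by a quarter extent on the face plane; the face is
        // treated as lying in the XY plane here purely for brevity.
        const double ox[4] = { -quarter, +quarter, -quarter, +quarter };
        const double oy[4] = { -quarter, -quarter, +quarter, +quarter };
        for (int i = 0; i < 4; ++i) {
            children[i] = std::make_unique<Patch>();
            children[i]->centreX = centreX + ox[i];
            children[i]->centreY = centreY + oy[i];
            children[i]->centreZ = centreZ;
            children[i]->extent  = half;          // half the size, so quadruple the polygon density
        }
    }
};
```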

Here you can see four stages of the subdivision illustrated – normally of course this would be happening as the view descended towards the planet, but I've kept the view artificially high here to illustrate the change in the geometry.  With such a basic system there is obviously a noticeable 'pop' when a tile is split into its four children.  This could be improved by geo-morphing the vertices on the child tile from their equivalent positions on the parent tile to their actual child ones, but as the texturing information is stored on the vertices there is going to be a pop as the higher frequency texturing information appears anyway.  Another option might be to render both tiles during a transition and alpha-blend between them, a system I used in the Geo project with mixed results.

LOD transitions are a classic problem in landscape systems but I don't really want to get bogged down in that at the moment, so I'm prepared to live with this popping and look at other areas.  It's a good solid start anyway I think, and with some basic camera controls set up to let me fly down to and around my planet I reckon I'm pretty well set up for future developments.