SVOs and back-face culling

Consider a green forest canopy over a brown dirt forest floor.

The voxelisation process will convert it to a sheet of brown voxels, with a sheet of green voxels over the top. As it calculates the down-sampled versions for the higher nodes in the octree, it will eventually merge the two sheets together, producing a single green-brown sheet.

svoblogpic1

This is the “Thin Wall” problem, and the issue is that a single voxel is trying to represent content that looks different when viewed from different directions – in this case above and below. In practice it means that the canopy will appear green when viewed from above, until the camera reaches a certain distance, at which point it will suddenly turn green-brown.

Cube faces

My plan is to change the voxelisation process so that it calculates a colour for each cube face, representing the content as viewed down the corresponding axis. This is still an approximation because it doesn’t cover all the possible angles a voxel can be viewed from, but it should be an improvement. The cube face data will be an intermediate data structure, and will facilitate detection of “Thin Wall” while processing voxel data. The final output will still be a regular 1-colour-per-voxel octree.
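As a rough sketch, the intermediate structure might look something like the following (the names, face ordering and layout here are my own illustration, not the engine’s actual code):

#include <array>
#include <cstdint>

struct Rgb { uint8_t r, g, b; };

// One colour per cube face, in an assumed order of
// +X, -X, +Y, -Y, +Z, -Z. The 'visible' flags mark faces that
// can actually be seen (hidden-face removal comes later).
struct VoxelFaces {
    std::array<Rgb, 6> colour;
    std::array<bool, 6> visible;
};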

svoblogpic2

In the top right box the top cube face is green, and the bottom is brown. We’ve detected “Thin Wall”, but we don’t have a strategy to deal with it yet.

Hidden faces

We can remove some ambiguity by removing hidden cube faces from consideration.

Firstly, when the polygon content is converted into voxels, we can detect when looking down an axis presents a view of the back-face of a polygon. In this case we know that the voxel should never actually be viewed from this direction – presumably it will be occluded by other voxels.

Secondly, if two voxels are hard up against each other, the touching faces will never be visible, and can both be excluded.
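Here’s a minimal sketch of both rules, assuming a simple flat-array voxel grid (all names here are hypothetical):

#include <vector>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Rule 1: looking in through a cube face whose outward direction is
// 'faceOutward', a polygon presents its back side when its normal
// points away from that face.
bool showsBackFace(const Vec3& polyNormal, const Vec3& faceOutward) {
    return dot(polyNormal, faceOutward) <= 0.0f;
}

enum Face { PX, NX, PY, NY, PZ, NZ };

struct Cell {
    bool solid = false;
    bool faceHidden[6] = {};
};

// Rule 2: faces pressed against a solid neighbour can never be seen.
// Shown for the X axis only; Y and Z are analogous.
void hideTouchingFacesX(std::vector<Cell>& grid, int n) {
    auto at = [&](int x, int y, int z) -> Cell& {
        return grid[(z * n + y) * n + x];
    };
    for (int z = 0; z < n; ++z)
        for (int y = 0; y < n; ++y)
            for (int x = 0; x + 1 < n; ++x) {
                Cell& a = at(x, y, z);
                Cell& b = at(x + 1, y, z);
                if (a.solid && b.solid) {
                    a.faceHidden[PX] = true;
                    b.faceHidden[NX] = true;
                }
            }
}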

svoblogpic3

(Right => Down sample)
(Down => Remove hidden cube faces. Or convert to final SVO.)

Removing hidden cube faces produces a single-sided sheet of brown voxels, visible from above but not below, with a double-sided sheet of green voxels above it.

As we progressively down-sample and remove hidden faces, the brown sheet and the down-facing side of the green sheet eventually press together and cancel each other out. The result is a single-sided green sheet visible from above only, which can be represented (unambiguously) by a green voxel.

Essentially the “Thin Wall” problem has disappeared.

This makes sense. The problem was that the voxel was trying to represent content that is green when viewed from above and brown when viewed from below. By recognising the camera can never view the voxel from below, we’ve removed that case and the ambiguity is gone. The down-sampled octree node can be coloured green and will look correct when viewed from above.

Observations

Thin Wall general case

It should be noted that this only solves a single specific case of “Thin Wall”, which is really not “Thin Wall” at all once you take into account the valid camera angles. It does not solve the “Thin Wall” general case.

Final octree

The intermediate data structure recognises when content can only be viewed from specific angles. This does not result in a magical octree where voxels will only appear when viewed from valid camera positions. It’s still just a plain old one-colour-per-voxel octree; we’re just calculating the most correct colours for the valid set of camera positions.

In this case, if we were to move the camera underground, the ground would appear brown until the camera was a certain distance away, then it would switch to green. This is acceptable behaviour: we’re viewing the content from an invalid camera position, as indicated by the one-sided ground surface in the original polygon mesh. If this were (somehow) a valid camera position, the ground surface should be made double-sided to indicate it. The voxel processing would then attempt to produce SVO nodes that look correct from above and below. (Of course, then it would run into the thin-wall issue again.)

Posted in Development blog, Sparse Voxel Octree, Thin Wall

Colour compression results

Here’s the result of the modified DXT1 colour compression experiment:

ColourCompressionComparison

The left side is standard DXT1 texture compression. The right side is the modified version, which involves shortening the reference colours to 15-bits and using the 2 spare bits as an “exponent”.

The colours in the dark shadowed regions look a lot better. Here’s the zoomed-in version. Notice the blotchy dark green patches in the left-hand image:

ColourCompressionZoomed

Considering this has the same compression ratio as vanilla DXT1, I think it’s an improvement, and I’ve yet to notice any difference from dropping from 16-bit to 15-bit reference colours.

It works well for my data set because I’m using baked-in lighting and shadows, so there are a lot of dark areas which benefit from the extra colour depth the modified algorithm provides at low intensities. If the lighting were dynamic, the base textures would have a more consistent brightness and there may not be much visible benefit.

Posted in Development blog, Progress, Texture compression

Colour compression

What happens when you change your colour compression logic without updating the decompression code…

Hybrid 2015-06-24 21-50-39-44

Posted in Development blog, Miscellaneous, Progress

Floating point DXT1 texture compression (sort of)

DXT1 texture compression

Take a look at this:

ColourCompression

The colours in the shadowed parts look horrible.

This is because the colour data is compressed with DXT1 texture compression, which stores a block of 16 texels in 64 bits: two 16-bit reference colours, plus two bits per texel to specify its interpolation between the two colours. It’s a good format for voxel colours when you need to fit a few hundred million of them into GPU memory, but it suffers from the limitations of 16-bit colour. There’s not enough colour depth to do dark areas well.
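For reference, here’s roughly what a DXT1 block looks like in memory (a sketch using my own names, not any particular library’s types):

#include <cstdint>

// One DXT1 block: a 4x4 group of texels in 64 bits.
struct Dxt1Block {
    uint16_t colour0;  // reference colour A, 5:6:5
    uint16_t colour1;  // reference colour B, 5:6:5
    uint32_t indices;  // 2 bits per texel, 16 texels
};

// The 2-bit code for texel (x, y) selects colour0, colour1, or one
// of the two colours interpolated between them.
inline int texelCode(const Dxt1Block& b, int x, int y) {
    return (b.indices >> (2 * (4 * y + x))) & 0x3;
}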

DXT1 “floating point” derivative

So I got to thinking: what if the colour components were floating point numbers instead?

With just 5 (or 6) bits per colour component, that’s not really feasible, but we could store one exponent and share it between all the colour components. This would give us extra colour depth when all components (of both the reference colours) are close to 0. By switching to 15-bit reference colours (instead of 16-bit) we free up two bits for the exponent, meaning it can range from 0-3.

The 15-bit colours can be unpacked into 24-bit colours as follows:

  1. Copy the 5 bits of each component into the high bits of each corresponding byte.
  2. Right shift by the exponent.
Red      Green    Blue     Exponent=3
10101    01010    11100    
  |        |        |      Unpack
  v        v        v
10101000 01010000 11100000

==>      ==>      ==>      Right shift by exponent
00010101 00001010 00011100

In the dark areas, when the components of the reference colours are all in the range 0-31, this gives a full 5 bits of colour depth, which should look as good as a 24-bit colour depth image.
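In code, the unpack step might look something like this (a sketch; the bit layout of the packed colour is my assumption):

#include <cstdint>

struct Rgb24 { uint8_t r, g, b; };

// 'packed' holds the 15-bit colour (5 bits per component, red in the
// highest bits); 'exponent' is the shared 2-bit exponent, 0-3.
Rgb24 unpack555WithExponent(uint16_t packed, int exponent) {
    // Step 1: copy each 5-bit component into the top of a byte.
    uint8_t r = uint8_t(((packed >> 10) & 0x1F) << 3);
    uint8_t g = uint8_t(((packed >> 5)  & 0x1F) << 3);
    uint8_t b = uint8_t(( packed        & 0x1F) << 3);
    // Step 2: right shift by the shared exponent.
    return { uint8_t(r >> exponent),
             uint8_t(g >> exponent),
             uint8_t(b >> exponent) };
}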

Limitations

Of course this only helps if all colour components are in the low intensity range. If some are high and some are low, the exponent must be set low so that the high intensity colours remain high. But in the worst-case scenario we’re still almost no worse off than with traditional DXT1 compression. The only difference is that we’re using 15-bit reference colours (5 bits for the green component instead of 6). I expect that would result in a small quality loss overall, but it would still be worth it for the improvements to the darker regions.
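One plausible way to choose the exponent at compression time (my assumption; the exact selection method isn’t decided here) is to pick the largest exponent that doesn’t clip the brightest component of either reference colour:

#include <algorithm>
#include <cstdint>

// 'c' holds the six 8-bit components of both reference colours:
// r0, g0, b0, r1, g1, b1.
int chooseExponent(const uint8_t c[6]) {
    uint8_t brightest = *std::max_element(c, c + 6);
    // 0xF8 (11111000) is the largest value a 5-bit component can
    // unpack to; each exponent step halves it (248, 124, 62, 31).
    int e = 0;
    while (e < 3 && brightest <= (0xF8 >> (e + 1)))
        ++e;
    return e;
}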

I think this is worth investigating.

Posted in Development blog, Texture compression

Voxelising polygon models by rendering slices

Background

I’ve had some success with combining voxel ray-casting and traditional polygon rendering in the same engine. A sparse voxel octree works very well for medium to far (to very far) content due to the built-in general-purpose level-of-detail logic, while switching to polygon rendering for close-up content prevents it from appearing blocky and means you can have a lot more detail for a lot less memory/disk space (polygon meshes are a lot more memory efficient).

The two rendering methods play nicely together too. The polygon content is always closer than the voxel content, so the engine can render the polygons first, and then ray-cast only the pixels that haven’t been covered by polygons.

I put this video together some months back to illustrate.

The problem

The voxel and polygon content have the same source – a textured height-map, and some static mesh models. It’s important that the voxel model look as similar as possible to how the rendered polygons will look, so as to avoid an obvious seam where the two types of rendering meet, and to prevent a jarring “popping” effect when content crosses the boundary and switches from voxel to polygon rendering.

My current voxelisation approach works fine for large continuous surfaces like the textured height-map. I’ve basically written a software renderer that projects each polygon orthogonally and rasterises it. The pixel coordinates become the voxel X and Y position, the depth value gives the Z, and by projecting down each axis and merging the results together I end up with nice, continuous voxel surfaces with no holes.
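The per-axis mapping from rasterised sample back to voxel coordinates is simple; something like this sketch (names are mine, and the rasteriser itself is omitted):

struct VoxelCoord { int x, y, z; };

// 'px'/'py' are the pixel coordinates of the rasterised sample and
// 'depth' is its depth value, assumed here to be in voxel units.
VoxelCoord toVoxel(int px, int py, float depth, int axis) {
    int d = int(depth);
    switch (axis) {
        case 0:  return { d, px, py };  // projected down X
        case 1:  return { px, d, py };  // projected down Y
        default: return { px, py, d };  // projected down Z
    }
}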

For leafy tree models though, not so great :-(.

Compare the palm trees (the ones indicated by the arrows) in these images:

On the left, the palm tree is a voxel model. On the right, the camera has been moved fractionally forward such that the palm tree is now rendered as a polygon model. The voxel model is noticeably fuller than the polygon version, and the popping effect is quite jarring.

The engine switches from voxels to polygons just before the voxels appear larger than a single pixel. Ideally the differences at that point should be subtle, but clearly that’s not happening.

What went wrong?

So what’s going on here?

In my opinion, the main issues are:

1. Almost empty voxels are displayed as solid

If any part of a voxel is intersected by polygon content, the voxel is treated as solid. This is probably the biggest issue, and it causes the voxel model to bulk up a lot fuller than the corresponding polygon model.

The palm tree model in particular has a lot of thin, spiny leaves that don’t occupy much volume. But any voxel volumes they intersect are treated as solid.

2. Voxelisation doesn’t consider all the content that intersects the voxel

If multiple polygons intersect the same voxel, the first one gets to set the colour. Or maybe the last one. Or maybe the one closest to the left. I really don’t know, but whichever one wins, there’s no guarantee its colour best represents the content inside the voxel. This can lead to ugly speckled artifacts, like the dark spots on the tree trunk.

3. Voxelisation doesn’t consider the voxel from different angles

Sometimes the content inside a voxel’s volume looks different when viewed from different angles. The classic example of this is the thin-wall problem, where a 1-voxel-thick wall has a different colour on each side. If a voxel can only be one colour, how can it accurately represent the content inside it?

Model slices

So the new approach I’ve come up with is to use OpenGL to render the polygon model orthographically into a colour buffer, using two z-aligned clipping planes to slice the content to a specific depth range. The resulting 2D image is the slice of the model at the specified depth. Repeating this process for different depth values builds up a full voxel representation of the model.
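A rough sketch of the slicing loop in classic OpenGL (names are mine; a real implementation would render into an offscreen framebuffer, and the model is assumed to occupy [0, modelSize] on each axis):

#include <GL/gl.h>
#include <cstddef>
#include <cstdint>
#include <vector>

// Renders the model once per slice, clipped to a one-voxel-thick
// z range, and reads each slice back as RGBA pixels.
void voxeliseBySlices(int res, double modelSize, void (*drawModel)(),
                      std::vector<uint8_t>& out) {
    const double voxel = modelSize / res;
    out.resize(size_t(res) * res * res * 4);
    glViewport(0, 0, res, res);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    // Generous depth range; the clip planes do the actual slicing.
    glOrtho(0, modelSize, 0, modelSize, -modelSize, modelSize);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glEnable(GL_CLIP_PLANE0);
    glEnable(GL_CLIP_PLANE1);
    for (int z = 0; z < res; ++z) {
        // Keep only geometry with z in [z * voxel, (z + 1) * voxel].
        const GLdouble lo[4] = { 0, 0,  1, -z * voxel };
        const GLdouble hi[4] = { 0, 0, -1, (z + 1) * voxel };
        glClipPlane(GL_CLIP_PLANE0, lo);
        glClipPlane(GL_CLIP_PLANE1, hi);
        glClearColor(0, 0, 0, 0);  // alpha 0 marks empty pixels
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        drawModel();
        glReadPixels(0, 0, res, res, GL_RGBA, GL_UNSIGNED_BYTE,
                     &out[size_t(z) * res * res * 4]);
    }
}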

I expect this approach to have a number of advantages:

Because the same OpenGL rendering is used to create the voxel model, I can be confident the voxel model I generate should be a good match for the model when rendered with polygons at run-time. I don’t have to maintain a software renderer that closely matches OpenGL’s texturing and lighting logic, or be bound by its rendering limitations.

I can scale the output buffer so that a single voxel maps to (say) a 4×4 pixel region, and average the colours to get a better representation of the content inside the voxel. This will also give me a better representation of how full the voxel is. If less than 1/2 full, it may be better to leave the voxel empty (although there are some issues with determining when doing this will create a hole in a continuous surface).
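For instance, the 4×4 averaging might look like this (a sketch; the half-full threshold and all names are my assumptions):

#include <cstdint>
#include <vector>

struct VoxelSample { uint8_t r, g, b; float coverage; bool solid; };

// Averages the covered pixels of one 4x4 region of an RGBA slice
// image. 'x0'/'y0' is the region's top-left corner.
VoxelSample averageRegion(const std::vector<uint8_t>& rgba, int width,
                          int x0, int y0) {
    int r = 0, g = 0, b = 0, covered = 0;
    for (int dy = 0; dy < 4; ++dy)
        for (int dx = 0; dx < 4; ++dx) {
            const uint8_t* p = &rgba[4 * ((y0 + dy) * width + (x0 + dx))];
            if (p[3] == 0) continue;  // alpha 0 = background
            r += p[0]; g += p[1]; b += p[2]; ++covered;
        }
    VoxelSample s = {};
    s.coverage = covered / 16.0f;
    s.solid = covered >= 8;  // the "at least half full" rule
    if (covered > 0) {
        s.r = uint8_t(r / covered);
        s.g = uint8_t(g / covered);
        s.b = uint8_t(b / covered);
    }
    return s;
}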

The image shows what each voxel looks like when viewed down the Z axis. By flipping/rotating the model and repeating the process I can calculate what each voxel looks like when viewed from each of the 6 cube faces, and determine whether a single colour is adequate to represent the content, or detect when there is a “thin-wall” issue to deal with (how to deal with it is a topic for another post).

Limitations

As mentioned, I’m still figuring out how to determine when removing a voxel will create a hole in a continuous surface. This can happen even if the voxel is mostly empty, if the surface just clips the corner of its volume. There’s one easy case – if the polygon content doesn’t touch any of the sides of the voxel, then it can safely be removed – but there are a bunch of difficult cases also.

The problem also illustrates an issue with traditional rendering of high polygon tree models. When the model is distant enough that the leaves and thin branches project to less than a pixel, you get a shimmering effect similar to the moiré effect on textures when the nearest-neighbour minification filter is used. Unlike textures, this is not easy to work around (besides rendering to a higher resolution and down-sampling, I guess). I’ve always assumed that low poly tree models were used solely for performance, but in some cases it looks like they may actually produce a better image.

Posted in Development blog, Sparse Voxel Octree