Consider a green forest canopy over a brown dirt forest floor.
The voxelisation process will convert it to a sheet of brown voxels, with a sheet of green voxels over the top. As it calculates the down-sampled versions for the higher nodes in the octree, it will eventually merge the two sheets together, producing a single green-brown sheet.
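To make the failure concrete, here is a minimal sketch of naive down-sampling. The 8-bit RGB colour type and the plain averaging scheme are my illustrative assumptions, not the engine's actual code, but they show why the sheets blend:

```cpp
#include <cstdint>
#include <vector>

struct Colour { uint8_t r, g, b; };

// Naive down-sampling: the parent voxel's colour is simply the average
// of its occupied children. With a green sheet resting on a brown
// sheet, the coarser levels blend the two into a muddy green-brown.
Colour averageColour(const std::vector<Colour>& children) {
    if (children.empty()) return {0, 0, 0};
    unsigned r = 0, g = 0, b = 0;
    for (const Colour& c : children) { r += c.r; g += c.g; b += c.b; }
    const unsigned n = static_cast<unsigned>(children.size());
    return { uint8_t(r / n), uint8_t(g / n), uint8_t(b / n) };
}
```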
This is the “Thin Wall” problem. The issue is that a single voxel is trying to represent content that looks different when viewed from different directions – in this case from above and from below. In practice it means that the canopy will appear green when viewed from above, until the camera reaches a certain distance, at which point it will suddenly turn green-brown.
My plan is to change the voxelisation process so that it calculates a colour for each cube face, representing the content as viewed down the corresponding axis. This is still an approximation, because it doesn’t cover all the possible angles a voxel can be viewed from, but it should be an improvement. The cube face data will be an intermediate data structure, and will facilitate detection of “Thin Wall” while processing voxel data. The final output will still be a regular 1-colour-per-voxel octree.
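Here is one way the intermediate structure might look. The face ordering, the optional-per-face representation, and the hasThinWall helper are all my assumptions for illustration, reusing the Colour type from the sketch above:

```cpp
#include <array>
#include <optional>

// Axis-aligned cube faces: positive then negative direction per axis.
enum Face { PosX, NegX, PosY, NegY, PosZ, NegZ, FaceCount };

// Intermediate voxel: one colour per cube face. An empty optional
// marks a hidden face, i.e. one the camera can never see.
struct FaceVoxel {
    std::array<std::optional<Colour>, FaceCount> faces;
};

// "Thin Wall" shows up as two opposite visible faces that disagree,
// e.g. a green top face over a brown bottom face.
bool hasThinWall(const FaceVoxel& v) {
    for (int axis = 0; axis < 3; ++axis) {
        const auto& pos = v.faces[2 * axis];
        const auto& neg = v.faces[2 * axis + 1];
        if (pos && neg &&
            (pos->r != neg->r || pos->g != neg->g || pos->b != neg->b))
            return true;
    }
    return false;
}
```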
In the top right box the top cube face is green and the bottom is brown: we’ve detected “Thin Wall”, but we don’t have a strategy to deal with it yet.
We can remove some ambiguity by removing hidden cube faces from consideration.
Firstly, when the polygon content is converted into voxels, we can detect when looking down an axis presents a view of the back-face of a polygon. In this case we know that the voxel should never actually be viewed from this direction – presumably it will be occluded by other voxels.
Secondly, if two voxels are hard up against each other, the touching faces will never be visible, and can both be excluded.
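Sketches of both rules, continuing the assumed FaceVoxel layout from above (the Vec3 type and the exact visibility tests are again my own illustrative choices):

```cpp
struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Outward direction of each cube face, in Face enum order.
constexpr Vec3 kFaceDir[FaceCount] = {
    { 1, 0, 0 }, { -1, 0, 0 },
    { 0, 1, 0 }, { 0, -1, 0 },
    { 0, 0, 1 }, { 0, 0, -1 },
};

// Rule 1: looking down an axis at a triangle's back-face means the
// voxel is never validly seen from that direction, so only the
// directions the triangle actually fronts receive its colour.
void applyTriangle(FaceVoxel& voxel, const Vec3& normal, Colour colour) {
    for (int f = 0; f < FaceCount; ++f) {
        if (dot(normal, kFaceDir[f]) > 0.0f)
            voxel.faces[f] = colour;
    }
}

// Rule 2: two voxels flush against each other along an axis hide the
// pair of touching faces; neither can ever be seen, so clear both.
// 'lower' sits on the negative side of the axis, 'upper' on the
// positive side.
void cancelTouchingFaces(FaceVoxel& lower, FaceVoxel& upper, int axis) {
    lower.faces[2 * axis] = std::nullopt;     // lower's positive face
    upper.faces[2 * axis + 1] = std::nullopt; // upper's negative face
}
```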
(In the diagrams: right => down-sample; down => remove hidden cube faces, or convert to the final SVO.)
Removing hidden cube faces produces a single-sided sheet of brown voxels, visible from above but not below, with a double-sided sheet of green voxels above it.
As we progressively down-sample and remove hidden faces, the brown sheet and the down-facing side of the green sheet eventually press together and cancel each other out. The result is a single-sided green sheet, visible from above only, which can be represented (unambiguously) by a green voxel.
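A sketch of that merge for two vertically stacked children, using the helpers above (a real down-sampler would handle a full 2x2x2 block and blend the side faces; this is just the case from the example):

```cpp
// Merge two vertically stacked children into one parent voxel.
// Assumes both children are occupied and flush, so their touching
// faces cancel; each surviving parent face takes the colour of
// whichever child still shows in that direction. Children are taken
// by value so the caller's copies are untouched.
FaceVoxel mergeVertical(FaceVoxel lower, FaceVoxel upper) {
    cancelTouchingFaces(lower, upper, /*axis=*/2); // Z is up
    FaceVoxel parent;
    parent.faces[PosZ] = upper.faces[PosZ] ? upper.faces[PosZ] : lower.faces[PosZ];
    parent.faces[NegZ] = lower.faces[NegZ] ? lower.faces[NegZ] : upper.faces[NegZ];
    // Side faces (X and Y) would blend both children; omitted here.
    return parent;
}
```

In the canopy case the brown sheet’s up-face cancels against the green sheet’s down-face, the brown sheet’s down-face was already culled as a back-face, and only the green up-face survives.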
Essentially the “Thin Wall” problem has disappeared.
This makes sense. The problem was that the voxel was trying to represent content that is green when viewed from above and brown when viewed from below. By recognising the camera can never view the voxel from below, we’ve removed that case and the ambiguity is gone. The down-sampled octree node can be coloured green and will look correct when viewed from above.
Thin Wall general case
It should be noted that this only solves a single specific case of “Thin Wall”, which is really not “Thin Wall” at all once you take into account the valid camera angles. It does not solve the “Thin Wall” general case.
The intermediate data structure recognises when content can only be viewed from specific angles. This does not result in a magical octree where voxels will only appear when viewed from valid camera positions. It’s still just a plain old one-colour-per-voxel octree; we’re just calculating the most correct colours for the valid set of camera positions.
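Concretely, the final conversion might look like this. Averaging only the visible faces is my assumption for the sketch, not a stated part of the author’s method:

```cpp
// Final conversion to the plain one-colour-per-voxel octree: blend
// only the faces that remain visible. A voxel left with nothing but
// a green top face simply comes out green.
Colour finalColour(const FaceVoxel& v) {
    unsigned r = 0, g = 0, b = 0, n = 0;
    for (const auto& face : v.faces) {
        if (face) { r += face->r; g += face->g; b += face->b; ++n; }
    }
    if (n == 0) return {0, 0, 0}; // fully hidden; the colour is moot
    return { uint8_t(r / n), uint8_t(g / n), uint8_t(b / n) };
}
```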
In this case, if we were to move the camera underground, the ground would appear brown until the camera was a certain distance away, then switch to green. This is acceptable behaviour: we’re viewing the content from an invalid camera position, as indicated by the one-sided ground surface in the original polygon mesh. If this was (somehow) a valid camera position, the ground surface should be made double-sided to indicate it. The voxel processing would then attempt to produce SVO nodes that look correct from both above and below. (Of course, then it would run into the thin-wall issue again.)