« on: October 10, 2022, 12:16:16 AM »
I've been begrudgingly writing code to make a mess of everything in order to try to make overlapping transparent polygons blend consistently.
I'm afraid this is going to open up several cans of worms, but it's something I think will further improve the level of quality experienced with SOM. This is one of those things that modern-day games can't really do, I think, because they try to choke down too many polygons. So it's an area where SOM should be able to shine, in theory.
But it poses a lot of problems. I've been working on this for more than a week. I thought I could use my 3D modeling software project as a reference, but I found out that its transparency sorting was botched (not by me), so I've had to completely redo it before I could get to SOM.
It uses a BSP (binary space partitioning) approach, which I think is the only correct approach. But it causes many problems. For one, it has to split polygons so that they can be sorted; if polygons stick into each other, there's no way to blend them consistently.
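To make the sorting idea concrete, here is a minimal sketch of back-to-front BSP traversal for blending. It assumes polygons have already been split so that none straddles a node's plane; the names and tree shape are illustrative, not SOM's actual code.

```python
# Hedged sketch: back-to-front traversal of a BSP tree for alpha blending.
# Polygons are assumed pre-split so none straddles a node's plane.
class Node:
    def __init__(self, plane, polys, front=None, back=None):
        self.plane = plane   # (normal, d): points where dot(n, p) == d
        self.polys = polys   # polygons lying on this plane
        self.front = front   # subtree on the normal's side
        self.back = back     # subtree on the opposite side

def side(point, plane):
    # signed distance of a point from the plane
    n, d = plane
    return sum(a * b for a, b in zip(n, point)) - d

def back_to_front(node, eye, out):
    if node is None:
        return
    if side(eye, node.plane) >= 0:
        # eye is in front of the plane: farthest geometry is behind it
        back_to_front(node.back, eye, out)
        out.extend(node.polys)
        back_to_front(node.front, eye, out)
    else:
        back_to_front(node.front, eye, out)
        out.extend(node.polys)
        back_to_front(node.back, eye, out)
```

The draw order flips per node depending on which side the eye is on, which is why a BSP gives a consistent blend order from any viewpoint, at the cost of the splitting described below.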
Traditionally this is done in terms of recursive half-spaces, but that causes a problem for the do_aa effect (cracks), so at a minimum it will probably want every polygon split against every other polygon. Splitting is done against each polygon's plane; polygons are grouped by plane. But planes extend infinitely (it's probably not practical to limit them, although maybe simple limits would work), so this can cause a lot of slicing and splintering.
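The splitting step itself looks roughly like this: clip a convex polygon against a plane, keeping both halves and inserting the intersection points on the crossing edges. This is only a sketch under the assumption of convex polygons; the epsilon and names are mine, not SOM's.

```python
# Hedged sketch: split a convex polygon against a plane so each piece
# can be sorted on its own side. Either returned piece may be empty.
EPS = 1e-5

def classify(point, plane):
    # signed distance from the plane; plane = (normal, d)
    n, d = plane
    return sum(a * b for a, b in zip(n, point)) - d

def split_polygon(verts, plane):
    front, back = [], []
    for i, a in enumerate(verts):
        b = verts[(i + 1) % len(verts)]
        da, db = classify(a, plane), classify(b, plane)
        if da >= -EPS:
            front.append(a)
        if da <= EPS:
            back.append(a)
        # edge crosses the plane: emit the intersection to both pieces
        if (da > EPS and db < -EPS) or (da < -EPS and db > EPS):
            t = da / (da - db)
            p = tuple(x + t * (y - x) for x, y in zip(a, b))
            front.append(p)
            back.append(p)
    return front, back
```

Every split like this adds vertices and polygons, which is exactly the splintering worry: planes extend infinitely, so one plane can slice geometry far away from the polygon that defined it.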
I expect this splintering won't be stable, since I intend to do it every frame for visible polygons. It's possible to precompute splits for everything, which might be nice because it could be done just once, but it would have to be updated whenever anything moves, and off-screen geometry might cause extra fragmentation.
This will probably cause temporal artifacts. One thing that might solve it is switching to per-pixel lighting. I don't know how much that would affect performance. It probably wouldn't look a whole lot better, but long term I figure it's inevitable, and I don't plan to maintain both vertex lighting and per-pixel lighting, so this may be the project that forces that change. (Edit: I now have doubts that per-pixel lighting will really help with temporal artifacts. I'm afraid it won't be a final solution, but it might make them less noticeable.)
The code for doing this is pretty complicated, and I don't know how much I'll try to optimize it. The naive way just treats the whole scene as one transparent polygon soup. It would probably be better to slice it up into islands, but that involves a lot of complication and risk too. It may not matter much given our low-poly targets, but islands cause extra slicing and the amount of work is nonlinear, so it could balloon and really swamp the CPU; if so, it will depend on how much transparency is on screen. A highly transparent scene couldn't be sliced into islands anyway.
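The island idea can be sketched as connected components over bounding-box overlap: polygons whose boxes don't touch any other group can be sorted independently. This is my own illustrative version (union-find over axis-aligned boxes), not SOM's implementation.

```python
# Hedged sketch: group transparent geometry into "islands" of mutually
# overlapping axis-aligned bounding boxes, so each island can be
# BSP-sorted on its own. Boxes are ((min_x, min_y, min_z), (max_...)).
def boxes_overlap(a, b):
    (amin, amax), (bmin, bmax) = a, b
    return all(amin[i] <= bmax[i] and bmin[i] <= amax[i] for i in range(3))

def islands(boxes):
    parent = list(range(len(boxes)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    # naive O(n^2) pass; fine at low poly counts, but this is where the
    # nonlinear cost lives if a scene is mostly transparent
    for i in range(len(boxes)):
        for j in range(i + 1, len(boxes)):
            if boxes_overlap(boxes[i], boxes[j]):
                union(i, j)
    groups = {}
    for i in range(len(boxes)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())
```

Islands only help when transparency is sparse: if everything overlaps everything, this collapses back into one big group, which matches the point above that a highly transparent scene can't be sliced up.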
I've already written quite a lot of code for this, but it's a pretty big project. I hope a day or two more of coding will get me to a point where I can run tests. I'm starting with level geometry, which only my KF2 project is known to use. It has a big waterfall that suffers from unsorted polygons: they're currently sorted per tile, but not per polygon. It's something I hope to achieve, but it's not fun. Just figuring out where to start was enough to make me want to avoid it until I could no longer delay.
Anyway, I figure I shouldn't just leave this forum in radio silence until I can finish this. And I'll probably have more things to talk about before I'm done. I don't know if this will be a release. I'm afraid I won't be happy with the results and may have to put it on hold.