Sword of Moonlight > Devs

Ordered transparency blending


Holey Moley:
I've been begrudgingly writing code to make a mess of everything in order to try to make overlapping transparent polygons blend consistently.

I'm afraid this is going to open up several cans of worms, but it's something I think would further improve the level of quality experienced with SOM. This is one of those things that I think modern-day games can't really do, because they try to choke down too many polygons. So it's an area where SOM should be able to shine, in theory.

But it poses a lot of problems. I've been trying to work on this for more than a week. I thought I could use my 3D modeling software project as a reference; however, I found out that its transparency sorting was botched (not by me), so I've had to completely redo it before I could get to SOM.

It uses a BSP (binary space partitioning) approach to this, which I think is the only correct approach. But it causes many problems. For one, it has to split polygons so that they can be sorted. If they're sticking into each other there's no way to blend them consistently.
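To sketch the BSP idea (the node layout here is just illustrative, not actual SOM code): with the camera on one side of a node's plane, you draw the far subtree, then the node's own coplanar polygons, then the near subtree, so farther polygons are always blended first.

```cpp
#include <vector>

// Illustrative BSP node, not SOM's real data structure.
struct Plane { float nx, ny, nz, d; };     // signed side = dot(n, p) + d

struct BspNode {
    Plane plane{};
    std::vector<int> polys;                // polygon indices lying on this plane
    BspNode* front = nullptr;              // half-space the normal points into
    BspNode* back  = nullptr;
};

// Painter's traversal: visit the subtree on the camera's far side first,
// then this node's polygons, then the near subtree, so blending order is
// always back to front.
void drawBackToFront(const BspNode* n, float cx, float cy, float cz,
                     std::vector<int>& order)
{
    if (!n) return;
    float side = n->plane.nx*cx + n->plane.ny*cy + n->plane.nz*cz + n->plane.d;
    const BspNode* farSide  = side >= 0 ? n->back  : n->front;
    const BspNode* nearSide = side >= 0 ? n->front : n->back;
    drawBackToFront(farSide, cx, cy, cz, order);
    for (int p : n->polys) order.push_back(p);
    drawBackToFront(nearSide, cx, cy, cz, order);
}
```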

Traditionally this is done in terms of recursive half-spaces. But that will cause a problem for the do_aa effect (cracks), so at a minimum it will probably want every polygon to be split against every other polygon. Splitting is done against each polygon's plane, and polygons are grouped into planes. But planes extend infinitely (and it's probably not practical to limit them, although maybe simple limits would work) and so this can cause a lot of slicing and splintering.
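The split itself is just a plane clip. A minimal sketch (my own names, not SOM's): classify each vertex by signed distance and emit an intersection vertex where an edge crosses the plane; one side of a split triangle is a triangle, the other is at most a quad, and both can be fan-triangulated.

```cpp
#include <vector>

struct Vec3  { float x, y, z; };
struct Plane { Vec3 n; float d; };         // signed distance = dot(n, p) + d

static float dist(const Plane& pl, const Vec3& p)
{
    return pl.n.x*p.x + pl.n.y*p.y + pl.n.z*p.z + pl.d;
}

// Clip one triangle against a plane, Sutherland-Hodgman style. Vertices
// exactly on the plane go to both sides; each output polygon can then be
// fan-triangulated.
void splitTriangle(const Vec3 tri[3], const Plane& pl,
                   std::vector<Vec3>& front, std::vector<Vec3>& back)
{
    for (int i = 0; i < 3; ++i) {
        const Vec3& a = tri[i];
        const Vec3& b = tri[(i + 1) % 3];
        float da = dist(pl, a), db = dist(pl, b);
        if (da >= 0) front.push_back(a);
        if (da <= 0) back.push_back(a);
        if ((da > 0 && db < 0) || (da < 0 && db > 0)) {
            float t = da / (da - db);      // parametric crossing point on the edge
            Vec3 m{ a.x + t*(b.x - a.x),
                    a.y + t*(b.y - a.y),
                    a.z + t*(b.z - a.z) };
            front.push_back(m);
            back.push_back(m);
        }
    }
}
```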

I expect that this splintering won't be stable, since I'm intending to just do this every frame for visible polygons. It's possible to try to precompute splits for everything, which might be nice because it could be done just once. But it would have to be updated if anything moves, and might cause extra fragmentation from things off screen.

This will probably cause temporal artifacts. One way that might solve it is to switch over to per-pixel lighting. I don't know how much that will affect performance. It probably wouldn't look a whole lot better, but long term I figure it's inevitable, and I don't plan to maintain both vertex lighting and per-pixel lighting, so this may be the project that forces that change. (Edited: I now have doubts that per-pixel lighting will really help with temporal artifacts. I'm afraid it won't be a final solution, but it might make them less noticeable.)

The code for doing this is pretty complicated and I don't know how much I'm going to do to try to optimize it. The naive way to do it will just treat the whole scene as one transparent polygon soup. It would probably be better to slice it up into islands, but that involves a lot of complication and risks too. It may not matter much given our low-poly targets. But it will cause extra slicing, and the amount of work is nonlinear, so it could balloon and really swamp the CPU; how bad it gets will depend on how much transparency is on screen. Highly transparent scenes couldn't be sliced up into islands anyway.
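For the island idea, one plausible approach (a sketch, not SOM code) is union-find over shared vertex indices, so each connected patch of transparent triangles can be sorted independently. Note it only catches triangles that actually share welded vertices, not ones that merely touch in space.

```cpp
#include <cstddef>
#include <numeric>
#include <vector>

// Union-find over vertex indices: triangles sharing any vertex end up in
// the same island, so each island can be depth-sorted independently.
struct UnionFind {
    std::vector<int> parent;
    explicit UnionFind(int n) : parent(n)
    { std::iota(parent.begin(), parent.end(), 0); }
    int find(int x)
    { while (parent[x] != x) x = parent[x] = parent[parent[x]]; return x; }
    void unite(int a, int b) { parent[find(a)] = find(b); }
};

// tris is a flat index list (3 ints per triangle); returns an island id
// per triangle (ids are representative vertex indices, stable but arbitrary).
std::vector<int> islandPerTriangle(const std::vector<int>& tris, int vertexCount)
{
    UnionFind uf(vertexCount);
    for (std::size_t i = 0; i + 2 < tris.size(); i += 3) {
        uf.unite(tris[i], tris[i + 1]);
        uf.unite(tris[i], tris[i + 2]);
    }
    std::vector<int> island(tris.size() / 3);
    for (std::size_t i = 0; i < island.size(); ++i)
        island[i] = uf.find(tris[i * 3]);
    return island;
}
```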

I've already written quite a lot of code for this, but it's a pretty big project. I hope a day or two more of coding will get me to a point where I can run tests. I'm starting with level geometry, which only my KF2 project is known to be using. It has a big waterfall that suffers from not sorting its polygons. They're currently sorted to the tile, but not to the polygon. It's something I hope to achieve, but it's not fun. Just figuring out where to start was enough to make me want to avoid it until I could no longer delay.

Anyway, I figure I shouldn't just leave this forum in radio silence until I can finish this. And I'll probably have more things to talk about before I'm done. I don't know if this will be a release. I'm afraid I won't be happy with the results and may have to put it on hold.

Holey Moley:
I'm realizing that to finish this I'm going to have to rewrite the low-level model display subroutines from scratch. It's probably a reasonable opportunity to consider re-engineering the whole thing. I've wanted to avoid that to maintain SOM's core identity, but I do think having correct transparency is important to that identity, and I don't think it will make sense to have the code on hand and not take advantage of it. And there's a lot of screwball extension code from back in the day that should probably not be implemented at the level it is, placed between the Direct3D API invocations.

I'm just starting to see the bigger picture here. It seems like this could be a natural evolution point.

Update: By nightfall I got the effect working for level geometry. I don't see any problems with my KF2 project's waterfall scene. It only uses directional light, however. I'm pretty concerned there are going to be unsolvable issues at some stage.

Holey Moley:

This is proving difficult to solve with do_aa. It pretty much requires every triangle to be split along the plane of every other triangle, and that seems to be too much to do on the CPU.

I had to ignore the fade in (spawn) effect for purposes of transparency sorting or else the system is overloaded when everything fades in at once, or even just when new areas are entered. (Edited: This means that spawning monsters will be blended in first, which might be the wrong order if they spawn right in front of you somehow, but usually they spawn on the furthest edge or inside fog. It might be worth spot fixing close up spawns later.)

I hit on a bit of luck by accident today: I tried to micro-optimize some simple logic and left out some parentheses (in code), which resulted in triangles not being split at all. That actually turned out to be a possible path forward, because it still kind of works (good enough), though obviously it has glitches where the triangles ought to be split to implement the BSP algorithm correctly.

What I'm thinking is to do the all-against-all splitting at the level of individual objects and same-texture parts of the map, and hope that's just "good enough" for do_aa. I don't know, to be honest.
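The "unsplit" fallback amounts to a plain painter's sort of whole triangles, e.g. by view-space centroid depth (illustrative only, depth source is up to the renderer). It only misorders triangles that interpenetrate or overlap cyclically, which is exactly what proper BSP splitting would resolve.

```cpp
#include <algorithm>
#include <vector>

// One entry per transparent triangle: its index plus a depth value
// (e.g. the view-space depth of its centroid, computed elsewhere).
struct TriDepth { int tri; float depth; };

// Sort farthest-first so blending happens back to front. Wrong only
// where triangles interpenetrate or form a cyclic overlap.
void sortBackToFront(std::vector<TriDepth>& v)
{
    std::sort(v.begin(), v.end(),
              [](const TriDepth& a, const TriDepth& b)
              { return a.depth > b.depth; });
}
```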

I'm a little afraid I'm either heading down a dead-end hallway or somehow this will work out to a net good. Either way it looks like the beginning of a project to overhaul (and clean up/simplify/modernize) the player's rendering code, which could be good just to have out of the way.

I'm hoping I can do a proof of concept, and then, if it works, look at doing the splitting in x2mdl and MapComp respectively, so that it doesn't have to be done every time things are sent to the GPU.

So far I haven't noticed artifacts caused by inserting vertices. I think they will appear though with point light sources. This proposal to make the cuts in advance would be stable in that regard. So I think this is the path forward. I think another benefit of this is if I do implement automatic crack-healing (do_aa) in MapComp maybe it can fill in any cracks caused by splitting too. (Edited: In case anyone is wondering, with MapComp it would be worth trying to separate parts of the map that don't touch each other for this purpose. Just in terms of the 2D grid if so most likely.)

Holey Moley:
more details

I think the way this can work is to precompute some parametric data for splitting triangles. The unsplit triangles have to be "transformed" into world space anyway, and splitting the models in advance would just be extra work, because then any such split points would have to be transformed as well, and it's pretty costly to do that math on the CPU.

Plus it's not possible to split animated models in advance, because they change in real time. The split points can't be added to the animation either; if they were built in, they'd have to be animated too.

But the splits will not change to match the animation frame, so they won't be a perfect match, and that will just be in keeping with the "good enough" approach.
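The parametric idea, as a sketch (this struct layout is my guess, not the real data): store which edge each cut lies on and the parameter t along it, then after transform/animation just re-interpolate the transformed endpoints.

```cpp
struct Vec3 { float x, y, z; };

// Hypothetical parametric split record: the cut lies on the edge between
// vertices a and b, at parameter t in [0,1].
struct EdgeSplit { int a, b; float t; };

// After the model's vertices have been transformed (or animated), the
// split point is re-interpolated from the transformed endpoints, so the
// plane-intersection math never reruns per frame. For animated models,
// t came from a rest pose, so the cut is only approximate: the "good
// enough" part.
Vec3 resolveSplit(const EdgeSplit& s, const Vec3* worldVerts)
{
    const Vec3& a = worldVerts[s.a];
    const Vec3& b = worldVerts[s.b];
    return { a.x + s.t*(b.x - a.x),
             a.y + s.t*(b.y - a.y),
             a.z + s.t*(b.z - a.z) };
}
```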

The existing drawing code uses a data structure that includes a 32-bit field for depth sorting. Luckily, because of that, a 32-bit pointer can fill that memory to transport this new splits data.
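Illustratively (a hypothetical struct, not SOM's real one; note a raw pointer only fits when building 32-bit, so a 32-bit index or handle into a splits array is the portable form of the same trick):

```cpp
#include <cstdint>

// The same 32 bits hold either the depth-sort key or a reference to the
// new per-primitive splits data, depending on which pass is running.
struct DrawRecord {
    union {
        uint32_t depthKey;   // original use: depth-sort value
        uint32_t splitsRef;  // new use: handle into the splits data
    };
};
```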

Model splits will be stored in the MDO format's new extension system. Level geometry will have to be stored somewhere in the MPX/MPY files if not just computed at runtime. Precomputing should have the benefit of being able to match up vertices to be able to reuse them.
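Matching up vertices for reuse could look like a standard weld pass (a sketch; the quantization step is a guess, not a SOM constant): quantize positions and hand back an existing index when two cuts land on the same spot, so precomputed splits don't duplicate vertices or reopen do_aa-style cracks.

```cpp
#include <cmath>
#include <map>
#include <tuple>
#include <vector>

struct Vec3 { float x, y, z; };

// Welds newly created split vertices: positions are snapped to a grid
// and looked up, so coincident cuts reuse one vertex index.
struct VertexWelder {
    std::map<std::tuple<int, int, int>, int> lookup;
    std::vector<Vec3> verts;

    int add(const Vec3& p, float quantum = 1e-4f)
    {
        auto key = std::make_tuple(int(std::lround(p.x / quantum)),
                                   int(std::lround(p.y / quantum)),
                                   int(std::lround(p.z / quantum)));
        auto it = lookup.find(key);
        if (it != lookup.end()) return it->second;  // reuse existing vertex
        int idx = int(verts.size());
        verts.push_back(p);
        lookup[key] = idx;
        return idx;
    }
};
```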

I'm trying to do this now. These days I'm not working on SOM every day. I can't say I'm working on it full time, not on programming at least, unfortunately. I'm still trying to hit my goals for the end of the year, but this project itself is proving to be a lot. I'm definitely not working on anything that isn't directly visible in my KF2 demo on itch.io until next year.

Holey Moley:
make or break

It only took 2 solid days to write some parametric splitting code to see if this will even work. I'll probably be debugging and testing it tomorrow. My hope is KF2's crystal flasks will look alright, since they're pretty complicated and are unacceptable without this additional work.

I'm pretty nervous, because if this doesn't work I've written a lot of code and I don't know what, if anything, is left to try. It will take some soul searching that I'd rather not have to do. I don't know if there's anything else left at the bottom of my bag of tricks this time :crossfingers:

