Projects, demos, and games Information / Re: STICKY: 2020 King's Field II port progress report
« on: May 28, 2021, 05:04:55 PM »
In the past couple of days I had some success with VR stuff (listed in reverse order): 2) I got 2.25x supersampling (1.5x in each dimension, resolution-wise) to produce a nice-looking picture (attached, left eye only) by adjusting the mipmap sample point down to 50%. I could write a long post about my thoughts on this, but I'm going to assume anyone reading this just wants to hear the news and not the technical breakdown. (In the picture, notice that the lettering is smooth, especially on the diagonals of the M. I assume if those are smooth, the rest must appear smooth too.)
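In D3D9 terms, moving the sample point down to 50% amounts to a negative mipmap LOD bias. Here's a minimal sketch of the arithmetic (the helper name is made up for illustration, and the formula is one plausible reading of the 50% figure, not SOM's actual source; the real bias would be installed via the sampler state shown in the comment):

```cpp
#include <cassert>
#include <cmath>

// Illustrative helper: convert a mipmap "sample point" fraction into the
// LOD bias a sampler expects. A fraction of 1.0 leaves mip selection at
// its default; 0.5 biases selection one level sharper, since
// log2(0.5) = -1.
float MipBiasFromSamplePoint(float fraction)
{
    return std::log2(fraction);
}

// In real D3D9 code the bias is a float smuggled through a DWORD
// sampler state (sketch only, not compiled here):
//   float bias = MipBiasFromSamplePoint(0.5f);
//   device->SetSamplerState(0, D3DSAMP_MIPMAPLODBIAS,
//                           *reinterpret_cast<DWORD*>(&bias));
```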
1) Yesterday, something on my mind made me revisit how the eyes (twin pictures) are accomplished. A little while back I figured out, inadvertently (I'll spare you the details; I wrote them somewhere around here a while back), that clipping pixels in the pixel shader is pretty costly, and I reckoned it would be much better to clip before the pixel phase. I first tried a "user clip plane," since the plane runs down the middle of the screen. It seemed to work but didn't quite, and I'm not sure why, but (without going into details) it's generally advised not to use these, especially on Intel, which my PC is/was. At some point I (stupidly) realized the 2D "scissor" clip option was what should be used, since this is just a simple 2D clip, and I'd already used it for text at one point because the text is drawn by D3DX, which doesn't let you use shaders at all. This helped the frame rate significantly (anything around 10% is pretty significant), mainly because it does a better job of eliminating pixels from up-close map geometry that's just on the other side of your nose.
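Since the clip is just a vertical split down the middle of a side-by-side back buffer, the per-eye rectangles are trivial to compute. A sketch (the `Rect` stand-in and `EyeScissor` name are illustrative, not SOM's; the real code would hand the result to `IDirect3DDevice9::SetScissorRect` with `D3DRS_SCISSORTESTENABLE` on):

```cpp
#include <cassert>

// Minimal stand-in for a D3D9 RECT (left, top, right, bottom).
struct Rect { int left, top, right, bottom; };

// Split a side-by-side stereo back buffer down the middle into one
// scissor rectangle per eye. eye: 0 = left, 1 = right.
Rect EyeScissor(int width, int height, int eye)
{
    int half = width / 2;
    if (eye == 0) return Rect{0, 0, half, height};
    return Rect{half, 0, width, height};
}
```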
EDITED: I realize I should've gone more in depth to explain why the clipping was done in the pixel shader before: I originally set up stereo (two eyes) to use "instancing" so that both eyes are drawn at the same time. That sounds great, and I think at that point the data was still being uploaded to the GPU every frame, so instancing would cut that memory-bus burden in half. But shortly after, I set up a system that juggles temporary buffers in video memory, which helped SOM's performance a lot. At that point instancing no longer saved on uploading, since only one copy of the data is uploaded either way. So the new way just draws everything twice and switches the "scissor" clip from eye to eye for every little thing SOM displays, e.g. a skeleton's femur. It would probably be better to do all the bones at once and do the animation on the CPU, but that's another story.
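The draw-everything-twice structure boils down to a two-iteration loop. A sketch under my own naming (the callback and the comments' per-eye state are illustrative, not SOM's actual draw path):

```cpp
#include <cassert>
#include <functional>

// Sketch of the "draw everything twice" approach that replaced instancing:
// loop over the two eyes, switch the scissor rectangle (and view matrix)
// to that eye, then redraw the scene.
int DrawStereo(const std::function<void(int /*eye*/)>& drawScene)
{
    int drawCalls = 0;
    for (int eye = 0; eye < 2; ++eye)
    {
        // Real code would do something like:
        //   device->SetScissorRect(&eyeRect[eye]);
        //   device->SetTransform(D3DTS_VIEW, &eyeView[eye]);
        drawScene(eye);
        ++drawCalls;
    }
    return drawCalls; // twice the draw calls, but the data is uploaded once
}
```

The point of the trade-off: with the buffer-juggling system, vertex data lives in video memory either way, so doubling the draw calls costs little compared to what the scissor saves per pixel.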
So (1) helped with frame rates, and that got me wondering if I could figure out (2) to make the picture look good and still hit a stable 60 Hz. And it did, on my system at least. But I still have to hook it up to the PlayStation VR, since I recall the frame rate is sometimes affected by having to take in the headset's USB input, which is pretty intense, although in theory, if the GPU is the limiting factor, that shouldn't matter. In practice, on this system the CPU and GPU share memory, so that might be why, or heat.
That costly clipping is still done for every pixel to poke holes in textures (for grass and the like), so I know I could roll up my sleeves and add a system that makes a twin for every shader, so that the clipping could be eliminated when a texture doesn't have holes; that would make a comfy cushion to guarantee no frame-rate blips. (Edited: a big reason I wanted to eliminate the eye clipping is that this kind of VR would not benefit from eliminating the texture clipping while it was still doing eye clipping, since it's the simple act of clipping that causes the GPU to perform badly, unfortunately. It has to do with the GPU not being able to make assumptions about where live pixels will end up in the depth buffer.)
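The "twin shader" idea is really just a per-material selection between a shader that clips and one that doesn't. A sketch (names and the integer stand-ins for shader handles are illustrative, not SOM's):

```cpp
#include <cassert>

// Pair each pixel shader with a variant that skips the per-pixel
// clip/discard, and pick the cheap one whenever the texture has no
// transparent holes. (Per-pixel clip defeats the GPU's early depth
// optimizations, so it should only be paid for when actually needed.)
struct ShaderPair
{
    int withClip;    // stand-ins for IDirect3DPixelShader9* handles
    int withoutClip;
};

int SelectShader(const ShaderPair& pair, bool textureHasHoles)
{
    return textureHasHoles ? pair.withClip : pair.withoutClip;
}
```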
Anyway, the main story here is the supersampling, since that actually makes things look better. But it's good to get the cost per frame down further, to eventually be able to use a PSVR with very inexpensive computers. And hopefully, once I get my new computer set up with a new high-end headset, I can just keep the PSVR plugged into this smaller computer. Actually, my current computer is like a cube that fits in the palm of your hand... whereas my new computer is enormous: https://www.amazon.com/Empowered-Grandia-GeForce-Workstation-Computer/dp/B08MQM8TXP/ref=sr_1_3 (NOTE: I paid well south of $1000 for mine; most of the price on that listing is inflated by the RTX graphics card, which I was able to swap out for a Vega APU. I'd like an RTX card for it, but they're only supposed to cost $600, except you can't buy them for less than twice or thrice the MSRP right now because of demand for transistors. I'm thinking about going ahead and getting a headset, since I think maybe the APU can handle it for SOM, except I'm kind of bummed that VR controllers don't have enough index-finger buttons to play SOM games, so I don't like the idea of buying controllers I'll likely never use... I like the announced PS5 VR controllers, but I think they won't come out until a headset appears, which seems unlikely to happen anytime soon.)