nikokoneko Posted February 11, 2017

Hi, I noticed that Babylon's implementation of the VR camera rig calculates distortion correction inside the fragment shader. Since the calculations are done per pixel, this causes a steep performance drop, especially on high-density screens, which makes the rig unusable on any mobile phone. On the simplest of scenes, I get only 30fps on a Google Pixel.

I wonder why this particular method was chosen over, say, displaying the rendered texture on a dense plane (20x20) and performing all calculations per vertex of that plane. With this method we would be doing the calculation some 400 times per eye (on a 20x20 mesh), versus over 900,000 times (once for each pixel on a QHD screen, for example). What I am referring to is the 2nd approach described here: http://smus.com/vr-lens-distortion/

Both the WebVR polyfill and Google VR View use this method, and I notice no performance drop AT ALL when running their examples.

The reason I ask is that I am thinking of developing this method for Babylon, simply because the current pixel-based implementation is unfortunately completely unusable on mobile. But before I start, I'd like to know if there is some underlying problem, inherent to Babylon, with implementing this method? Thanks
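For anyone following along, the mesh-based approach boils down to applying the polynomial lens-distortion model to each vertex of a coarse grid on the CPU (or in a vertex shader), instead of to every fragment. A minimal sketch in TypeScript, assuming illustrative coefficient values (`k1`, `k2` are hypothetical here, not Babylon's actual lens parameters, and the function names are my own):

```typescript
// Polynomial radial distortion: r' = r * (1 + k1*r^2 + k2*r^4).
// Applied per vertex, this is the 2nd approach from smus.com/vr-lens-distortion.
function distort(p: [number, number], k1: number, k2: number): [number, number] {
  const r2 = p[0] * p[0] + p[1] * p[1];
  const scale = 1 + k1 * r2 + k2 * r2 * r2;
  return [p[0] * scale, p[1] * scale];
}

// Pre-distort an (n x n)-quad grid of vertex positions in NDC space.
// For n = 20 this is only (n+1)^2 = 441 evaluations per eye, versus one
// evaluation per pixel with the fragment-shader approach.
function buildDistortedGrid(n: number, k1: number, k2: number): [number, number][] {
  const verts: [number, number][] = [];
  for (let y = 0; y <= n; y++) {
    for (let x = 0; x <= n; x++) {
      // Map grid indices to normalized device coordinates [-1, 1].
      const ndc: [number, number] = [(x / n) * 2 - 1, (y / n) * 2 - 1];
      verts.push(distort(ndc, k1, k2));
    }
  }
  return verts;
}
```

The GPU then rasterizes the eye texture across the warped grid, and the hardware's perspective-correct interpolation fills in the pixels for free, which is why the per-vertex variant costs almost nothing.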
GameMonetize Posted February 13, 2017

Hello, we started by developing the distortion correction at the pixel level because it was easier to test and implement. But you are completely right: it comes at a steep cost. I would really appreciate your help if you can provide a per-vertex implementation (which could be turned on/off). We could even consider making it the default and letting users opt in to the per-pixel version.
nikokoneko Posted February 14, 2017 Author

I'd like to help. Right now I'm working against several deadlines, but when I finish I will take a stab at it. Before I start diving into the thousands of lines of the current Babylon.js code, where would you recommend I look? I should have no problem writing the actual shader*, but I don't know anything about how PostProcess is currently implemented in Babylon; I guess I should start there?

(* EDIT: actually, Google has published the actual shaders as part of their WebVR polyfill.)
JCPalmer Posted February 14, 2017

Definitely look at Camera.ts. That is where sub-cameras are defined and where the rigs are set up. Implementing it there, so it works regardless of whether movement comes from the accelerometer or a gamepad, is a much better design.