BMWPilote Posted December 4, 2018

Hi folks, does anyone know whether it is possible to create a 32-bit floating-point buffer on iOS? If it is not possible, is there a way to avoid losing too much precision when using a 16-bit floating-point buffer? Thanks!
trevordev Posted December 4, 2018

Last time I checked on iPhone, I believe you are correct that it's not supported. You can check this with scene.getEngine().getCaps().textureFloatRender. One option might be to use multiple 16-bit float buffers and store pieces of your result across all of them in your shader.
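A quick sketch of how that capability check might drive a fallback decision. The helper name, its string return values, and the decision order are illustrative assumptions, not Babylon.js API; only the caps flags themselves come from the engine:

```javascript
// Illustrative helper (not Babylon.js API): pick a render-target strategy
// based on the capabilities object returned by getCaps().
function chooseRenderTargetType(caps) {
  if (caps.textureFloatRender) {
    return "float";      // 32-bit float render targets are supported
  }
  if (caps.textureHalfFloatRender) {
    return "half-float"; // 16-bit floats: split precision across buffers
  }
  return "byte";         // last resort: pack values into RGBA8 channels
}

// In a real app this would be fed from the engine, e.g.:
//   chooseRenderTargetType(scene.getEngine().getCaps());
```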
BMWPilote (Author) Posted December 5, 2018, replying to trevordev:

OK, I understand. Thanks for the idea. But do bitwise shift operations work in shaders on devices like the iPhone?
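For context: WebGL 1 shaders (GLSL ES 1.00) have no bitwise operators at all, which is why the usual packing tricks are written with floor/fract/mod arithmetic instead. The equivalence, sketched in JavaScript for non-negative integers:

```javascript
// GLSL ES 1.00 has no `>>` or `&`, so packing shaders emulate them with
// plain arithmetic. These helpers show the equivalence for non-negative ints.
function shiftRight8(x) {
  return Math.floor(x / 256); // same as x >> 8
}

function low8(x) {
  return x % 256;             // same as x & 255
}
```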
BMWPilote (Author) Posted December 5, 2018, replying to trevordev:

Could you please help me a bit more with that? I need a 32-bit normal buffer. How exactly can I use two 16-bit normal buffers and then combine them in a fragment shader?
trevordev Posted December 5, 2018

I haven't done this myself, but you could take a look at this thread to get some ideas: https://stackoverflow.com/questions/18453302/how-do-you-pack-one-32bit-int-into-4-8bit-ints-in-glsl-webgl

@Sebavan might know more if you get stuck.
Sebavan Posted December 5, 2018

Yup, we rely on the same kind of code in shadows, for instance, to store depth in 4 bytes to simulate a 32-bit float depth buffer.
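That depth trick can be sketched outside a shader, too. Below is a JavaScript simulation of the classic GLSL pack/unpack of one [0, 1) float into four 8-bit channels; the function names are illustrative, not the actual Babylon.js shader code:

```javascript
// JavaScript simulation of the classic GLSL trick that stores one [0, 1)
// float across four 8-bit channels (~24 bits of usable precision).
function fract(x) {
  return x - Math.floor(x);
}

function packToRGBA8(v) {
  // Spread v across four channels at increasing scales (255^0 .. 255^3)...
  const enc = [1, 255, 65025, 16581375].map(s => fract(s * v));
  // ...then remove from each channel the bits carried into the next one.
  for (let i = 0; i < 3; i++) {
    enc[i] -= enc[i + 1] / 255;
  }
  // A real RGBA8 render target quantizes each channel to 8 bits:
  return enc.map(e => Math.round(e * 255) / 255);
}

function unpackFromRGBA8(enc) {
  // Mirrors dot(rgba, vec4(1.0, 1.0/255.0, 1.0/65025.0, 1.0/16581375.0)).
  return enc[0] + enc[1] / 255 + enc[2] / 65025 + enc[3] / 16581375;
}
```

The round-trip error stays on the order of 1/255^4, far better than a single 8-bit channel could give.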
BMWPilote (Author) Posted December 6, 2018, replying to Sebavan:

So if I wanted to simulate a 32-bit normal buffer, would I need 3 unsigned-int buffers?
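One possible layout, as an assumption rather than how any particular engine does it: a normal has three components, and if each component gets 16 bits via two 8-bit channels, the whole normal fits in six of the eight channels of two RGBA8 render targets written with MRT. The helper names below are hypothetical:

```javascript
function fract(x) {
  return x - Math.floor(x);
}

// Hypothetical sketch: pack one normal component n in [-1, 1] into two
// 8-bit channels (hi = coarse part, lo = fine remainder).
function packComponent(n) {
  const v = n * 0.5 + 0.5;                           // remap [-1, 1] to [0, 1]
  const hi = Math.floor(v * 255) / 255;              // coarse 8-bit part
  const lo = Math.round(fract(v * 255) * 255) / 255; // fine 8-bit remainder
  return [hi, lo];
}

function unpackComponent(hi, lo) {
  const v = hi + lo / 255;
  return v * 2 - 1;                                  // back to [-1, 1]
}
```

Three such component pairs give 16-bit precision per axis; an alternative with the same idea is to pack x and y at high precision and reconstruct z from the unit-length constraint.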
BMWPilote (Author) Posted December 6, 2018, replying to Sebavan:

Could you please point me to where I can find the relevant code in the Babylon.js repo?
Sebavan Posted December 6, 2018

Pack: https://github.com/BabylonJS/Babylon.js/blob/master/src/Shaders/shadowMap.fragment.fx#L2
Unpack: https://github.com/BabylonJS/Babylon.js/blob/master/src/Shaders/ShadersInclude/shadowsFragmentFunctions.fx#L3

Hope that helps!