Klaas Posted October 27, 2016

Hi everyone,

I'm currently trying to get the fragment's world position in a post-process shader, but after hours of fiddling with the inverted matrices I have no clue what I'm doing wrong. This is my current state: http://www.babylonjs-playground.com/#KZJXZ

Can anyone give me a hint or point me in the right direction?

Thanks,
Klaas
Nabroski Posted October 30, 2016

Hello,

please have a look here, maybe it helps you. Do a playground search for .getDepthMap(): http://doc.babylonjs.com/playground?q=.getDepthMap()

Feel free to ask more questions.
NasimiAsl Posted October 30, 2016

Hi,

```glsl
// world position in the fragment shader
vec3 pos1 = (world * vPosition).xyz;
if (gl_FragCoord.y > 0.) pos1 = vPosition.xyz;
gl_FragColor = vec4(pos1, 1.);
```

http://www.babylonjs.com/cyos/#13EJWN

If you don't change your mesh position (or rotation or scaling), the world position equals the local position, so both give the same result:
http://www.babylonjs-playground.com/#1YR2VY#1
http://www.babylonjs-playground.com/#1YR2VY#3 // with rotation and scale

You don't have this data in the post-process, but we can calculate it with the color system. The result is not exact: to detect the world position in a post-process we use one render target, so each of x, y and z is limited to the range 0 -> 255, and only within that integer range do you get the exact position (no floating point):
http://www.babylonjs-playground.com/#1YR2VY#10 (for understanding ShaderBuilder)
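The 0 -> 255 limitation NasimiAsl mentions can be sketched outside GLSL. A minimal Python example (the scene bounds `scene_min`/`scene_max` and the helper names are assumptions for illustration) showing how packing one coordinate into a single 8-bit color channel quantizes the recovered position:

```python
def encode_channel(value, vmin, vmax):
    """Map a world coordinate into a single 8-bit color channel (0-255)."""
    t = (value - vmin) / (vmax - vmin)       # normalize to 0..1
    return max(0, min(255, round(t * 255)))  # quantize to one byte

def decode_channel(byte, vmin, vmax):
    """Recover the approximate coordinate from the 8-bit channel."""
    return vmin + (byte / 255.0) * (vmax - vmin)

# Assumed scene bounds for one axis
scene_min, scene_max = -50.0, 50.0

x = 12.34
b = encode_channel(x, scene_min, scene_max)
x_back = decode_channel(b, scene_min, scene_max)

# x_back is close to x, but only to within one quantization step,
# which is why the recovered world position is "not exact"
step = (scene_max - scene_min) / 255.0
assert abs(x_back - x) <= step
```

With a 100-unit scene range, one byte gives steps of roughly 0.39 units, which is the positional error NasimiAsl is referring to.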
Klaas Posted November 17, 2016 Author

Hi there,

thanks for your help! The main reason why all my efforts failed is that the OpenGL projection matrix produces non-linear depth information. Why? I don't know! Perhaps they made it so that near objects get a better z-resolution. So I had to modify the depth-map renderer to get my linear depth value.

The vertex shader:

```glsl
// Attributes
attribute vec3 position;
attribute vec3 normal;

#include<bonesDeclaration>

// Uniforms
#include<instancesDeclaration>
uniform mat4 viewProjection;
uniform mat4 view;

varying vec4 worldCoords; // view-space coordinates for the fragment shader
varying vec3 vNormalW;    // surface normal for the fragment shader

#if defined(ALPHATEST) || defined(NEED_UV)
varying vec2 vUV;
uniform mat4 diffuseMatrix;
#ifdef UV1
attribute vec2 uv;
#endif
#ifdef UV2
attribute vec2 uv2;
#endif
#endif

void main(void)
{
    #include<instancesVertex>
    #include<bonesVertex>

    gl_Position = viewProjection * finalWorld * vec4(position, 1.0);

    // here I get the vertex position in view space
    worldCoords = view * finalWorld * vec4(position, 1.0);

    // here I get the surface normal
    vNormalW = normalize(vec3(finalWorld * vec4(normal, 0.0)));

    #if defined(ALPHATEST) || defined(BASIC_RENDER)
    #ifdef UV1
    vUV = vec2(diffuseMatrix * vec4(uv, 1.0, 0.0));
    #endif
    #ifdef UV2
    vUV = vec2(diffuseMatrix * vec4(uv2, 1.0, 0.0));
    #endif
    #endif
}
```

And here the fragment shader ... where all the magic happens:

```glsl
varying vec4 worldCoords;
varying vec3 vNormalW;

#ifdef ALPHATEST
varying vec2 vUV;
uniform sampler2D diffuseSampler;
#endif

uniform float far;

void main(void)
{
    #ifdef ALPHATEST
    if (texture2D(diffuseSampler, vUV).a < 0.4) discard;
    #endif

    // I only need the depth value ...
    gl_FragColor.a = abs(worldCoords.z) / far;

    // ... so I can use the remaining color channels to store the surface normal
    gl_FragColor.xyz = (vNormalW.xyz + 1.0) / 2.0;
}
```

... and then in my post-process ...
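The non-linearity Klaas ran into can be checked numerically. A small Python sketch (the `near`/`far` values are assumptions; the formulas are the standard OpenGL perspective projection terms) comparing the hardware depth with the linear depth `abs(worldCoords.z) / far` written above:

```python
def ndc_depth(z_view, near, far):
    """Window-space depth (0..1) after a standard OpenGL perspective projection.
    clip.z = -(far+near)/(far-near) * z - 2*far*near/(far-near), clip.w = -z."""
    z_clip = -(far + near) / (far - near) * z_view - 2.0 * far * near / (far - near)
    w_clip = -z_view
    return (z_clip / w_clip) * 0.5 + 0.5  # NDC -1..1 -> window 0..1

def linear_depth(z_view, far):
    """The linear depth the modified depth renderer stores in the alpha channel."""
    return abs(z_view) / far

near, far = 0.1, 100.0          # assumed clip planes
halfway = -(near + far) / 2.0   # midway between the planes (view space, -z forward)

# Halfway between near and far the linear depth is ~0.5 ...
assert abs(linear_depth(halfway, far) - 0.5) < 0.01
# ... but the projected (hardware) depth is already crowded against 1.0,
# which is the "better z-resolution for near objects" effect
assert ndc_depth(halfway, near, far) > 0.99
```

This is why reading the raw depth buffer and treating it as a linear distance fails, and why the modified renderer stores `abs(z_view) / far` instead.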
The post-process fragment shader:

```glsl
// Parameters
uniform vec3 extends; // the maximal extents in x, y and z direction
uniform mat4 mView;   // inverted view matrix
...

void main(void)
{
    gl_FragColor = texture2D(textureSampler, vUV);

    // depth renderer output
    vec4 pInfo = texture2D(depthSampler, vUV);
    vec4 d;

    // bring the 0..1 viewport coords to -1..1 and multiply by the depth;
    // this gives you the clip-space coords
    d.xy = (vUV.xy * 2.0 - 1.0) * pInfo.a;

    // the linear clip-space coords multiplied by the directional maximals;
    // this gives you the view coords
    d.xy *= extends.xy;
    d.z = pInfo.a * extends.z;
    d.w = 1.0;

    // view coords multiplied by the inverted view matrix;
    // this gives you the world coords
    vec3 vPositionW = vec4(mView * d).xyz;

    // decode the normal
    vec3 vNormalW = pInfo.xyz * 2.0 - 1.0;
    ...
}
```

... so it all comes out nicely. It even leaves me room for the surface normal, which I can now use for some nice post-effect lighting!

This is a very early screenshot of my project. The layered fog, volumetric point lights, beams, beam lights and unit range marker are all drawn in the post effect.

Greetings,
Klaas

UPDATE: I meant the projection matrix is non-linear ... not the viewport matrix!
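The view-space reconstruction step can be verified numerically. A Python sketch of the same math, assuming a symmetric frustum (the `fov_y`, `aspect` and `far` values are assumptions, and z is treated as a positive distance for simplicity); `extends` is taken as the half-width, half-height and depth of the frustum at the far plane, which is what makes the shader's multiply-by-depth trick land back on the original point:

```python
import math

def reconstruct_view_pos(u, v, depth, extends):
    """Rebuild a view-space position from screen UV (0..1) and linear depth,
    mirroring the post-process shader's computation of d."""
    dx = (u * 2.0 - 1.0) * depth * extends[0]
    dy = (v * 2.0 - 1.0) * depth * extends[1]
    dz = depth * extends[2]
    return (dx, dy, dz)

# Assumed symmetric frustum
fov_y = math.radians(60.0)
aspect = 16.0 / 9.0
far = 100.0
half_h = math.tan(fov_y / 2.0)            # half-height per unit distance
extends = (half_h * aspect * far,          # max |x| at the far plane
           half_h * far,                   # max |y| at the far plane
           far)                            # max depth

# A view-space point, with depth stored as distance / far (as in the renderer)
x, y, z = 10.0, 5.0, 40.0
depth = z / far
# its screen UV for this frustum (perspective divide, then 0..1 range)
u = (x / (half_h * aspect * z)) * 0.5 + 0.5
v = (y / (half_h * z)) * 0.5 + 0.5

rx, ry, rz = reconstruct_view_pos(u, v, depth, extends)
assert abs(rx - x) < 1e-9 and abs(ry - y) < 1e-9 and abs(rz - z) < 1e-9
```

The perspective divide by z and the multiply by `depth * extends` cancel exactly, so the view-space point is recovered; multiplying by the inverted view matrix (the shader's `mView * d`) then lifts it to world space.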
GameMonetize Posted November 18, 2016

Seems really great!