Siberia Posted November 17, 2020

Hello! I have a question about performance, concerning "RenderTexture and/or filter" for a specific case.

The context: our canvas is a big container with a lot of layers, in rendering order:

1 - Background image layer (a huge texture)
2 - A tile layer: a container that holds n sprites (furniture, etc.)
3 - A character layer: a container that holds n sprites controlled by players
4 - A lighting layer: a container that holds individual animated light sources AND vision sources for characters (PIXI.Mesh instances and custom shaders)
5 - A controls layer that holds n PIXI.Graphics objects

Some characters have nightvision, which wouldn't be a problem if their nightvision weren't grayscale. To handle the grayscale, we need to turn the background layer, the tile layer and the character layer gray inside the field of vision.

The options we have retained:

1. Create a RenderTexture of layers 1/2/3 (only when 1/2/3 have changed), and process the texture in layer 5 in a PIXI.Mesh with custom fragment and vertex shaders.
2. Create a RenderTexture of layer 1 only (only when 1 has changed), and use filters on individual sprites in 2/3, only when necessary. Often it would be fewer than 15 sprites, but sometimes more.

Above all, we are looking for the best performance. Option 1 has big advantages, but an acquaintance tells me the filter option would "probably" be more efficient. You see, that "probably" is a problem. But it is true that layers 1/2/3 can be particularly heavy, with huge textures and a lot of sprites.

Do you have any advice on which option to choose? Thanks.
ivan.popelyshev Posted November 17, 2020

Probably 1. Filters use temporary renderTextures to process stuff. Also, that kind of setup is easy to build with pixi-layers: https://pixijs.io/examples/#/plugin-layers/lighting.js. It has a special "layer.getRenderTexture()" feature. Just swap your container for a layer (no need to set parentLayer yet) and use it.
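For reference, the lighting example Ivan links does roughly the following; this is a sketch from memory of the plugin's API (property names like useRenderTexture and clearColor come from that example), not guaranteed verbatim:

    // assumes the pixi-layers plugin is loaded, exposing PIXI.display.Layer
    const lighting = new PIXI.display.Layer();
    lighting.useRenderTexture = true;          // layer draws into its own RenderTexture
    lighting.clearColor = [0.2, 0.2, 0.2, 1];  // ambient light level

    app.stage.addChild(lighting);              // renders first, filling the texture

    // a sprite that puts the layer's texture back on screen, multiplied over the scene
    const lightingSprite = new PIXI.Sprite(lighting.getRenderTexture());
    lightingSprite.blendMode = PIXI.BLEND_MODES.MULTIPLY;
    app.stage.addChild(lightingSprite);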
Siberia (Author) Posted November 18, 2020

Oh, thanks Ivan! I could steal some code from pixi-layers; we just need the getRenderTexture method. I need:

- LayerTextureCache (without double-buffer support)
- LayerTextureCache handling in our own PIXI.Container subclass, especially in the render method

Did I miss anything?
ivan.popelyshev Posted November 18, 2020

Yes, that's right. If you don't need the main feature, "sorting elements to layers automatically", you can just take the buffering code. It's handy to use with filters/meshes, like I did in https://codesandbox.io/s/tender-franklin-iycmu
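The buffering idea, reduced to a minimal sketch: a container that redirects its own rendering into a cached RenderTexture and restores the previous target afterwards. This is an illustration of the approach, not pixi-layers' actual LayerTextureCache code; it assumes the PixiJS v5 renderer API. Note the children end up only in the texture; a sprite (or mesh) has to put it back on screen.

    // Minimal sketch: render children into a cached RenderTexture from inside render().
    class BufferedContainer extends PIXI.Container {
        constructor(width, height) {
            super();
            // screen-sized in pixi-layers; any size works for a sketch
            this._rt = PIXI.RenderTexture.create({ width, height });
        }

        getRenderTexture() {
            return this._rt;
        }

        render(renderer) {
            renderer.batch.flush();                       // finish pending batched draws
            const prev = renderer.renderTexture.current;  // remember the current target
            renderer.renderTexture.bind(this._rt);        // redirect output to the cache
            renderer.renderTexture.clear();
            super.render(renderer);                       // draw children into the texture
            renderer.batch.flush();
            renderer.renderTexture.bind(prev);            // restore the previous target
        }
    }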
Siberia (Author) Posted November 19, 2020 (edited)

Hey, it's working fine! I just need to calculate a matrix and pass it to the vertex shader to position the texture correctly. By the way, is there a method to calculate such a matrix automatically from the properties of the target container?

Another point: you have to make two render calls to display the layer on screen (in addition to generating the cache). I was wondering if it would be better to use the rendered texture in a sprite rather than calling render twice?

And thanks again Ivan!
ivan.popelyshev Posted November 19, 2020 (edited)

> is there a method to calculate a matrix automatically based on the properties of the target container?

I need more details. I've done it many times, but I don't understand why you need it; for lighting layers it's always screen-sized.

> I was wondering if it would be good to use the rendered texture in a sprite rather than calling render twice?

It's the same. The difference is that you have to be tricky to call the render() method inside itself; that's why my code in pixi-layers binds the RT and then restores the previous texture. Also, if you get the order in the tree wrong, like sprite first and layer second, you'll see the previous frame in the sprite.
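Ivan's ordering caveat in two lines (names are illustrative, building on the sketches above):

    stage.addChild(bufferedLayer);  // first: fills the render texture this frame
    stage.addChild(new PIXI.Sprite(bufferedLayer.getRenderTexture()));  // second: reads it
    // the other way around, the sprite displays the previous frame's contents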
Siberia (Author) Posted November 19, 2020

> need more details. I've done it many times, but I don't understand why you need it; for lighting layers it's always screen-sized.

Here's an example. The background is a container holding n containers, which all have a zIndex:

- the background, natural elements, objects, etc.
- n characters
- n lines of sight (a PIXI mesh, bound to each character)

Basically, I need to get a render texture from the background and pass it to the meshes (n meshes). The meshes are in fact quads, with specific shaders to render the line of sight. I just need to pass a portion of the render texture to each mesh, where the texture will be rendered in grayscale.
ivan.popelyshev Posted November 19, 2020 (edited)

OK, so the renderTexture will have the size of the screen. In the mesh shader you need world coords: pass the "translationMatrix * vertexPosition" part to a varying, so the fragment shader knows where this pixel is on screen.
ivan.popelyshev Posted November 19, 2020 (edited)

Oh, right, you need normalized coords. Add the screen width/height to the uniforms and divide by them in the vertex shader. Do not use the "gl_FragCoord" thingy, because you don't really know where you are rendering; there might be a filter on top of everything.
Siberia (Author) Posted November 20, 2020

OK, so I think I have a brain lock. This is often the case when I'm working with matrices and projections. Here is the vertex shader:

    precision mediump float;

    attribute vec2 aVertexPosition;
    attribute vec2 aUvs;

    uniform mat3 translationMatrix;
    uniform mat3 projectionMatrix;
    uniform vec2 canvasDimensions;

    varying vec2 vUvs;
    varying vec2 vSamplerUvs;

    void main() {
        vUvs = aUvs;
        vSamplerUvs = ((translationMatrix * vec3(aVertexPosition, 1.0)).xy - (mesh position?)) / (canvasDimensions?);
        gl_Position = vec4((projectionMatrix * translationMatrix * vec3(aVertexPosition, 1.0)).xy, 0.0, 1.0);
    }
ivan.popelyshev Posted November 20, 2020 (edited)

translationMatrix * aVertexPosition is the pixel position: (0,0) is the left-top, (width,height) the right-bottom. You have to divide it by canvasDimensions to get normalized coords for the full-screen renderTexture.
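Applying that to the shader above, the two uncertain terms drop out; a sketch, with the source wrapped in a JS string the way PixiJS meshes usually take it:

    // the vertex shader above with Ivan's fix: no mesh position needed,
    // just divide the screen-space pixel position by the canvas dimensions
    const vertexSrc = `
        precision mediump float;
        attribute vec2 aVertexPosition;
        attribute vec2 aUvs;
        uniform mat3 translationMatrix;
        uniform mat3 projectionMatrix;
        uniform vec2 canvasDimensions;
        varying vec2 vUvs;
        varying vec2 vSamplerUvs;

        void main() {
            vUvs = aUvs;
            // pixel position on screen: (0,0) top-left, (width,height) bottom-right
            vec2 screenPos = (translationMatrix * vec3(aVertexPosition, 1.0)).xy;
            // normalize to 0..1 to sample the screen-sized renderTexture
            vSamplerUvs = screenPos / canvasDimensions;
            gl_Position = vec4((projectionMatrix * translationMatrix * vec3(aVertexPosition, 1.0)).xy, 0.0, 1.0);
        }
    `;
    // canvasDimensions would be set from [renderer.screen.width, renderer.screen.height]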
Siberia (Author) Posted November 20, 2020

The PIXI shaman has spoken, we listen. It works!! Thanks Ivan!
Siberia (Author) Posted November 25, 2020 (edited)

It's me again with a little question. I'm trying to map my UV coords to the sampler coords in my fragment shader, but I have a problem when I move the mesh or zoom in and out. I tried this:

- put meshDimensions as a uniform on the fragment shader
- pass translationMatrix as a varying to the fragment shader

To map my UV coords to the sampler coords, I'm doing this in the fragment shader:

    vec2 mappedCoord = (vec3(uv * meshDimensions, 1.0) * translationMatrix).xy / canvasDimensions;

But it doesn't work... I know how to do it in a custom shader for filters, but not for a mesh.
ivan.popelyshev Posted November 25, 2020 (edited)

You've overcomplicated things. I don't know why you need meshDimensions in the first place. Why didn't "translationMatrix * aVertexPosition / canvasDimensions" work for you? At this point I have to ask you to make a minimal demo.
Siberia (Author) Posted November 25, 2020

> You've overcomplicated things. I don't know why you need meshDimensions in the first place. Why didn't "translationMatrix * aVertexPosition / canvasDimensions" work for you? At this point I have to ask you to make a minimal demo.

Yeah, it works well, but for some meshes I need to apply specific effects: waving the sampler from the center of the mesh. So in this specific case I just need to map the mesh UV coords to the sampler coords. I'm preparing a demo.
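The thread moves to Slack before this is resolved, but for what it's worth, one way to wave around the mesh center without meshDimensions or extra matrices is to drive the offset from the mesh's own UVs and apply it to the screen-space sampler coords. A hypothetical fragment sketch, not a confirmed answer from the thread:

    // hypothetical: ripple the screen-texture lookup around the mesh center,
    // using only the two varyings the vertex shader above already provides
    const fragmentSrc = `
        precision mediump float;
        varying vec2 vUvs;        // 0..1 across the mesh
        varying vec2 vSamplerUvs; // 0..1 across the screen-sized renderTexture
        uniform sampler2D uSampler;
        uniform float time;

        void main() {
            vec2 fromCenter = vUvs - 0.5;                       // mesh-centered coords
            float wave = sin(length(fromCenter) * 20.0 - time); // ripple from the center
            vec2 offset = fromCenter * wave * 0.005;            // keep the distortion small
            gl_FragColor = texture2D(uSampler, vSamplerUvs + offset);
        }
    `;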
ivan.popelyshev Posted November 25, 2020

Anyway, I invited you to the pixijs Slack; check your email. Join #shaders and #show-and-tell