JCPalmer Posted May 2, 2016

DB,

The problems with pressing buttons come from 2.4-alpha not working the same way with respect to picking. There is another thread I made about it, to avoid clogging up yours. On a device with a touch screen, you just have to keep pressing in a slightly different place. (Good thing this was caught in alpha.) If you are running on a device with a mouse, avoid hitting a letter. VR almost always works, because it is the shortest description. I could make a 2.3 version if you really need it, but post processing will never work in 2.3.

I have not pushed this to the Google Drive hosted page yet, but I have also redesigned how cameras keep track of the post processes to run. No more double indexing. That code was a nightmare / Rube Goldberg contraption. I ripped out about 60 lines, but still had the same bizarre results. Then I noticed everything got fixed as soon as I resized the window. Added a markTextureDirty() to the PostProcess class, and everything now works. Planning on PR'ing the redesigned version, since I deleted the earlier changes. One of the images in the 3 stereo rigs (side-by-side, cross-eyed, & over-under) still disappears when a post process is added, so they need something to change.

Have also added the ability to switch between an Arc Rotate camera & a Target camera (called Free, since that takes up less space). These 2 branches in the Camera class tree handle rigs differently. Good thing, since the Anaglyph camera looks wildly different between the 2 types.

Plan on more usability enhancements, like FOV & interaxial distance animation, if time allows. Maybe add buttons to go up-down 10 clicks, where: the form would disappear, then a delay so you can get the thing back on your head; maybe have the model count off each of the clicks, since I have to test voice sync outside of its dev page anyway (in my deepest voice for max creepiness); at the end, the form reappears.

Going to push what I have so far to Google Drive at the end of the day, today.
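The markTextureDirty() fix described above boils down to a dirty-flag pattern: invalidate the post process's cached render target so it gets rebuilt on the next frame — which is the same thing a window resize happened to trigger, explaining why resizing "fixed" it. A minimal plain-JavaScript sketch of that pattern, assuming nothing about Babylon's actual PostProcess internals (the class name, fields, and activate() method here are invented for illustration):

```javascript
// Minimal dirty-flag sketch: the post process lazily (re)creates its
// render target, and markTextureDirty() forces recreation on the next
// activate() call instead of waiting for a window resize.
class SketchPostProcess {
  constructor(width, height) {
    this.width = width;
    this.height = height;
    this._texture = null; // stand-in for a GPU render target
  }

  // The fix described above: invalidate the cached texture.
  markTextureDirty() {
    this._texture = null;
  }

  // Called once per frame; rebuilds the target only when dirty.
  activate() {
    if (this._texture === null) {
      this._texture = { w: this.width, h: this.height };
    }
    return this._texture;
  }
}

const pp = new SketchPostProcess(640, 480);
const t1 = pp.activate();
pp.width = 800;          // something changed the required size...
pp.markTextureDirty();   // ...so flag it, and the next frame rebuilds
const t2 = pp.activate();
```

Without the explicit flag, the stale target survives until something else (a resize) forces recreation — the bizarre results described above.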
This is still a work in progress though.

Warning: bullshit follows. The only way I could see needing to do reverse ray tracing is if the display is curved & correspondingly bigger than your eye, relative to how far away it is going to be from it. I can see how this might give better results than some cheapo distortion. Trying to use OpenGL to drive this seems difficult for me to imagine. Vertex shaders try to project fragments/triangles onto a flat framebuffer, and WebGL / JavaScript / BJS would seem to be just piling whipped cream on top of "you know what".
MasterSplinter Posted May 2, 2016

On 4/22/2016 at 3:00 PM, jessepmason said:

That's pretty cool that Weta is looking to view all their 3D assets through the browser; if any framework, it's definitely Babylon! I prefer plugging the phone into the USB now — that way you can take advantage of the picker in the center of the display to select things, etc. As for code, it's rather simple with Babylon, so I don't think there is really anything to point out to get you started. But there is probably someone with more experience using Babylon who might say differently. I find using mobile WebVR is great for a few things: watching 360 movies, panoramas — even browsing the internet is awesome! But using it with Babylon just isn't there yet; I haven't seen a Babylon or a three.js scene where I don't feel sick using it. So just note your VR experience isn't going to be perfect compared to making a .apk. Definitely would be interested in seeing whatever you come up with!

It's not Babylon, it's WebGL!
JCPalmer Posted May 3, 2016

Ok, picking is now solved. Check the other thread if more is wanted regarding this.

What is weta, a public television station in D.C.? Must be something else.
GameMonetize Posted May 3, 2016

http://wetaworkshop.com/
dbawel Posted May 4, 2016 Author

@Deltakosh - Good link to start with. I was always confused by the public production arm of the US government whose initials are WETA.

Weta Workshop was started by Richard Taylor and Peter Jackson in the late 80s, and Jamie Selkirk joined as the 3rd founder shortly following their first film, "Meet the Feebles." It's a dirty puppet film, and worth watching - however, their next film, "Brain Dead," is in my opinion the best zombie movie ever made. When they began working on "The Frighteners" - which is also a great film - they hired 8 animators working in what is now the kitchen of Weta Workshop. Following "The Frighteners," they expanded their digital workshop and moved into the building next door, which has now grown into more than 500 artists working full time between Weta Workshop and Weta Digital, along with Weta Productions, founded in 2005 for television production. Not to mention an entire film studio, known as Stone Street, whose size and function rival any studio in Hollywood. To date, Weta has contributed to the visual effects and creative design for more than 100 movies, including Lord of the Rings, Avatar, the Hobbit, King Kong, the Avengers, Narnia, and practically any film with good-quality effects you've probably seen. And FYI - they beat ILM in many tests for film work, including Avatar - the shots that came out of Avatar from ILM were really bad before Jim was "encouraged" to switch to Weta for all design and production - but that's another story.

By the way, a weta is one of the nastiest insects in the world, and I would be woken up at least once every couple of months with one of these very large, nasty insects chewing on my arm or leg. Check out the pictures attached.

@jessepmason - Thanks for the advice - I'm bypassing the USB for now, as I am adapting an existing scene and app for a demo.
But I do like the circle pointer combined with a Bluetooth controller for picking elements and objects.

@JCPalmer - Excellent demo of the VR camera. I would never have thought the issue would be picking a button on a portion of the text - especially since it works regardless of where I pick, upon refresh, for the first pick. Any reason why this is the case? It would be valuable to know why this is occurring.

As for Magic Leap and rendering light fields, this is a task which is unbelievably difficult. I wouldn't expect babylon.js to integrate a light field renderer, due to the obscene amount of computation as well as the yet-unsolved complexities. But since their business is selling hardware and displays, not selling software, they are developing the renderer first, and will then require an API (or compatibility in format) for setting up and/or adapting environments/scenes to be processed by the renderer. The amount of processing is so labor intensive at this time that there is no way of currently rendering in real time, but I might expect that processors will be fast enough in 2 years to potentially handle this, or to make the rendering a fast post process. So I don't know if Babylon is a potential choice for them in the future; I'm simply trying to convince them that WebGL is not going away, and that we can hopefully support WebGL frameworks such as Babylon scenes - otherwise they will find themselves supporting a proprietary API forever, when this is not their business model at all, except for setting up rendering. The person heading up the software and hardware development actually mentioned WebGL to me as a possibility first, almost 2 years ago now - but he's not yet convinced that this is the future for gaming and apps. Autodesk recently announced and is now pouring millions upon millions into WebGL, so there is a plausible future for babylon.js to play a role in the new frontier of natural 3D and 4D natural-light displays.
For more info on light field rendering, the following link will provide you with a small glimpse into what they are up against, and you'll see that for them it's all about the renderer in support of CGI scenes, and not the API or development framework - to a point. http://graphics.stanford.edu/projects/lightfield/

DB
MasterSplinter Posted May 4, 2016

9 hours ago, JCPalmer said:

What is weta, a public television station in D.C.? Must be something else.

Lol, this made my day. :)
JCPalmer Posted May 4, 2016

DB,

This problem should be fixed. The problem with picking is that the rules changed in 2.4. The Dialog extension is 100% WebGL. The text is actually a merged MESH of Dialog.Letter mesh clones. The 2D & 3D font.js files are actually generated in Blender with the Tower of Babel exporter. The text mesh is placed ever so slightly in front of the "button" mesh, so it will always be seen. In 2.4, you have to actively set mesh.isPickable = false if you do not want it to be chosen. In 2.3, even if you did not do this, the mesh also had to have an action assigned, and the button mesh is the one with the action. Not setting the text mesh as unpickable meant it now worked as a blocker mesh.

The Magic Leap people probably need to make a custom implementation of OpenGL; otherwise they are going to be responsible for the entire API, forever. It is going to be difficult to attract developers to a completely one-off API. If no one buys due to lack of games, it dies. Most of reverse ray tracing can be handled by the part that runs the vertex shader to place the fragments where they need to be. If I did not know that there were always going to be 2 displays, I would have thought OpenGL ES 3 (the source for WebGL 2.0) would be a good choice. OpenGL proper, though, can handle multiple color buffers - search GL_STEREO. OpenGL 4.x might be an API more worthy of their hardware. Multiple color buffers are not in the ES variant, which was cut down for mobile devices. There is also a glDrawBuffers() method, where you issue 1 command to draw into multiple buffers at the same time. I have never done this, but they would be foolish not to check it out.

Jeff
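The 2.4 picking rule described above can be sketched in a few lines: the nearest mesh along the pick ray wins unless it has explicitly opted out with isPickable = false, which is exactly why the text mesh sitting slightly in front of the button started blocking it. This is a toy plain-JavaScript stand-in, not Babylon's actual picking code — pickNearest and the distance fields are invented for illustration:

```javascript
// Toy version of the 2.4 rule: nearest mesh along the pick ray wins,
// unless it has explicitly opted out. "distance" stands in for the
// hit distance along the ray from the camera.
function pickNearest(meshes) {
  let hit = null;
  for (const mesh of meshes) {
    if (mesh.isPickable === false) continue; // 2.4: must opt out explicitly
    if (hit === null || mesh.distance < hit.distance) hit = mesh;
  }
  return hit;
}

// Mirroring the Dialog extension: text mesh slightly in front of the button.
const buttonMesh = { name: "button", distance: 1.0 }; // has the action
const textMesh   = { name: "text",   distance: 0.99 }; // no action, would block

textMesh.isPickable = false; // the fix: the button is hit again
```

In 2.3 the action requirement filtered the actionless text mesh out automatically; in 2.4 the opt-out has to be explicit, or the front mesh becomes a blocker.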
JCPalmer Posted May 4, 2016

Ok, I got user post processes working for all rigs (a rig camera cannot be marked as isIntermediate when there is also a user post process). The Black & White one will work no matter what. Need a little more testing before the PR. The reason this is important is that 3D rigs are implemented using post processes. If one feature cannibalizes another feature, then that feature is what you call 'shit'.

Plan on playing tomorrow with my improved tester on my 3D TV. I have never really even tested over-under successfully. Android Chrome full screen was not actually completely full screen last June, when I put this in. With a much better scene, I should be able to make headway on settings for FOV & interaxial distance. BTW, the VR rig does not change when these change. Is that supposed to happen?
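Since the 3D rigs are themselves implemented as post processes, a user post process and the rig's own pass have to coexist in one ordered chain rather than one replacing the other. A toy sketch of that chaining, assuming nothing about Babylon's real pipeline (the pass functions and frame object are invented for illustration):

```javascript
// Each pass takes the previous frame and returns a new one; running the
// chain applies every pass in order, so a user effect (Black & White)
// and a rig pass (e.g. the anaglyph merge) both survive.
function runChain(frame, passes) {
  return passes.reduce((img, pass) => pass(img), frame);
}

const blackAndWhite = (img) => ({ ...img, grayscale: true }); // user post process
const anaglyphMerge = (img) => ({ ...img, merged: true });    // rig's own pass

const out = runChain({ grayscale: false, merged: false },
                     [blackAndWhite, anaglyphMerge]);
// out.grayscale and out.merged are both true: neither feature
// cannibalized the other
```

The bug class described above is equivalent to one pass dropping the other out of this list; the fix is making sure both stay in the chain in the right order.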
dbawel Posted May 5, 2016 Author

@JCPalmer - I'm very interested in your test and any findings you have once it appears to be working. As I'm certain you know, in the babylon-master file, both lensSeparationDistance and the FOV calculation are certainly addressed and configurable in VRCameraMetrics, but I haven't been able to test the VR rig as planned last weekend. As for the ML development group, FYI - the lead was a key developer of DirectX - whatever that's worth. I look forward to your test results.

DB
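To make the interaxial-distance question concrete: each rig camera sits half the interaxial distance to either side of the parent camera, so animating that distance means recomputing the per-eye positions every frame. A worked plain-JavaScript example of the arithmetic — the function and the 0.064 m figure are illustrative, not Babylon's VRCameraMetrics defaults:

```javascript
// Each eye camera is offset half the interaxial distance along the
// parent camera's local x axis (y and z are left unchanged in this sketch).
function eyePositions(center, interaxialDistance) {
  const half = interaxialDistance / 2;
  return {
    left:  { x: center.x - half, y: center.y, z: center.z },
    right: { x: center.x + half, y: center.y, z: center.z },
  };
}

const eyes = eyePositions({ x: 0, y: 1.7, z: 0 }, 0.064);
// eyes.left.x === -0.032, eyes.right.x === 0.032
```

If the VR rig really does not update when lensSeparationDistance changes, it would suggest offsets like these are computed once at rig setup rather than per frame.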