JCPalmer Posted July 4, 2018

I am not sure there is really a need to be able to record (webm) when using the VR rig. One of the few 3D videos on YouTube has gotten 2.8 million views, so I would definitely want to get some of that. I have not seen any videos with barrel distortion, though. I do not have the hardware. Can you even watch stuff with a "phone on your head"? I recall something during the winter Olympics. If trying to record with the VR rig is a waste of time, please let me know.

If this is desirable output, basically nothing comes out when canvas.toDataURL('image/webp', quality); is called on Chrome when this rig is used. I suspect that is because each of the sub-cameras writes to its own viewport, and that is not taken into account. I am really glad this is not the case for the stereo rigs I added, or I might have screwed myself. When I tried the newer WebVR rig, it did output, but only a single screen. Is the VR rig just around for backward compatibility? Might it be OK to mess with the old VR rig, maybe add another post process, if I can think of a way?

FYI, @Wingnut & @Deltakosh, I cringe every time RIG_MODE_STEREOSCOPIC_SIDEBYSIDE_PARALLEL is mentioned as a solution when some device does not support WebVR. The stereo rigs are interlacing, meaning that in the doubled dimension only every other pixel is printed. That gives each side a squished look, but on a 3D TV it widens out to the original dimensions. Something for these devices would need to crop the left & right sides of each image to really work.
RaananW Posted July 5, 2018

Hi JC, the camera rig is still being used in WebVR. WebVR has two child cameras, defined also as rig cameras (unless it was internally changed?).

About exporting the canvas, it is important to know when to run canvas.toDataURL. I remember trying to use it as well and having a few problems with it, because I called it inside the Babylon before-render loop (obviously the wrong place). If you want to share a demo and show what's not working, maybe we can get it to work?
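To illustrate RaananW's point, here is a minimal sketch (mine, not from the thread) of grabbing the frame only after the scene has fully rendered, rather than from the before-render loop:

    // capture each frame once rendering for it has completed
    scene.registerAfterRender(function () {
        // 'image/webp' with a quality argument (0..1) is Chrome-only
        var frame = engine.getRenderingCanvas().toDataURL('image/webp', 0.8);
        // hand the dataURL off to whatever is assembling the webm
    });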
JCPalmer Posted July 5, 2018 Author

I think I'll postpone any demos till the PG can save them. I am using a scene-level after-render. The stereo rigs also use a post process, and they work. That is why I zeroed in on the part that differs, the viewport feature.
Guest Posted July 5, 2018

PG restored
JCPalmer Posted July 5, 2018 Author

Demo made. It only runs on Chrome. I tried to simplify to the maximum, so no webm is made, just a single 'image/webp', which is displayed on a new page. So you can see the source code, I commented out the after-render registration. The upshot is the demo worked, which is pretty good to me. It did not solve the problem of why this rig is not generating output, but I now know I was looking in the wrong place. Assumptions reset now in progress.

FYI, if you replace the rig with RIG_MODE_STEREOSCOPIC_SIDEBYSIDE_PARALLEL, an error is generated: "e.StereoscopicInterlacePostProcess is not a constructor". This works in the last stable version, so the version in dev has broken it.
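For reference, the single-frame trick the demo describes is roughly this (my reconstruction, not the actual PG code):

    // grab one frame as a webp dataURL and open it on a new page
    // so the output can be eyeballed; Chrome-only for 'image/webp'
    var dataUrl = engine.getRenderingCanvas().toDataURL('image/webp', 0.9);
    var win = window.open('');
    win.document.write('<img src="' + dataUrl + '"/>');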
Guest Posted July 5, 2018

Will fix the stereo issue
Guest Posted July 5, 2018

Will be fixed with next commit
JCPalmer Posted July 6, 2018 Author

OK, I looked at the webm file through a webm viewer program. All the frames are there. The byte size of the frames is what I am expecting. I modified my test scene to do the same trick as the playground, namely take the last frame of the video, make a document out of it, & just write over the web page. It is coming out fine. Doing some completely random changing just to get something "to pop", I changed the scene clear color to grey from black, but when the video is played all the frames are still just black.

Got me to thinking though. Just what exactly IS that white stuff that surrounds? Is it even stuff? Can we make it actual stuff in dev, to see if a webp image gets generated that a webm video understands?

I can do a few more tests, like looking at the file through a hex file reader. Doing that helped me see that setting the quality to 1.0 changed the format from VP8 to VP8L. This did not work; got all black there too. That is why my quality slider only goes to 0.99. Hmm? I'll look at this area again too.
Guest Posted July 6, 2018

The white stuff is the clear color of the scene (used to clear the render texture used by the VR distortion post process). Do you want to change the color (easy, by setting the clear color of the render target), or something else?
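A sketch of what that suggestion might look like from application code. Heavily hedged: _rigCameras and _postProcesses are internal members that may change, and I am assuming PostProcess exposes a clearColor (Color4) used when clearing its render target, as the source link later in this thread suggests:

    // grab the VR rig's sub-camera and its distortion post process,
    // then override the color used to clear its render target
    var rigCam = camera._rigCameras[0];                       // internal API
    var pp = rigCam._postProcesses && rigCam._postProcesses[0]; // internal API
    if (pp) {
        pp.clearColor = new BABYLON.Color4(0.5, 0.5, 0.5, 1); // opaque grey
    }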
JCPalmer Posted July 6, 2018 Author

As a test, yes, anything without any alpha would do. Can this be done in application code, or only in the framework?
JCPalmer Posted July 6, 2018 Author

Same question as before about app or framework code? FYI, in a WEBP image, there can be 3 different codec tags: VP8, VP8L, & VP8X. VP8L is for lossless; that is what I get when I set the quality to 1.0. Not sure what VP8X is, but when I change to the VR rig, that is what is coming out, not 'VP8 ' like all the other rigs! One of these 3 strings is always found in the dataURL; 8 bytes later the data starts. I am currently just checking for 'VP8' in the data. The WEBM video format takes 2 codecs, 'VP8' & 'VP9'. Something is causing a 'VP8X' to be generated. At least now I think I know the reason the video is black. The border seems to be the only difference.
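A minimal sketch of that codec check (the function name is mine). toDataURL returns base64, so the bytes need decoding before the tag can be searched for; 'VP8X' is checked first, since an extended file can also contain a plain 'VP8 ' chunk:

    function getWebPCodec(dataUrl) {
        // strip the "data:image/webp;base64," prefix & decode to raw bytes
        var raw = atob(dataUrl.substring(dataUrl.indexOf(',') + 1));
        var fourCCs = ['VP8X', 'VP8L', 'VP8 '];
        for (var i = 0; i < fourCCs.length; i++) {
            if (raw.indexOf(fourCCs[i]) !== -1) return fourCCs[i];
        }
        return null;
    }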
JCPalmer Posted July 6, 2018 Author

Hold on, something just happened. I changed my test to find the start of data from 'VP8' to 'VP8 ' (with a trailing space):

    let keyframeStartIndex = webP.indexOf('VP8 ');

The video is now not all black. The area in the background is black & jagged, but that's a start. I am really thinking this has to do with the alpha of the clear color. My checks for 'VP8X' are still successful, so BOTH must be in the file when using the VR rig. (Am going to have to check the quality 1.0 thing again too.)

Still a similar question: can the alpha be taken out of clear, or is it also being used by the edges of what is rendered, so it will not matter?
Guest Posted July 6, 2018

Well, there is no alpha (that I'm aware of) here. Only a plain color is used: https://github.com/BabylonJS/Babylon.js/blob/master/src/PostProcess/babylon.postProcess.ts#L471
JCPalmer Posted July 6, 2018 Author

I am going to do a quick test on Chrome or Firefox using the HTML Canvas.captureStream() instead of toDataURL(). I tried it earlier, but got strange results. If this method does give good results for the VR rig, I think I have found a way to get around the issue I have with it. That issue is that it is realtime-based. It is much faster than toDataURL(), because it just passes a memory pointer of the canvas to a browser background thread. But you cannot use it to directly render at a true, dependable, settable frame rate. Also, the requested frame rate might be greater than what your scene can actually be rendered at, for a given resolution, on your machine. An example is a complicated scene with many meshes, using 2 sub-cameras & post-processing for 3D, say at 1080p or Ultra-HD resolution.

I think the command-line program ffmpeg has an option which allows you to overwrite whatever the capture said the time was with a fixed increment. Then you can capture frames at perfectly timed points in time, regardless of when they actually rendered. I need to merge the final consolidated audio file with the video file anyway. The option is:

Quote:
-r[:stream_specifier] fps (input/output,per-stream)
Set frame rate (Hz value, fraction or abbreviation). As an input option, ignore any timestamps stored in the file and instead generate timestamps assuming constant frame rate fps. This is not the same as the -framerate option used for some input formats like image2 or v4l2 (it used to be the same in older versions of FFmpeg). If in doubt use -framerate instead of the input option -r.
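A hypothetical invocation of that option (the file names are placeholders, not from the thread): using -r as an input option forces the captured webm's timestamps to a constant 24 fps before re-encoding and merging the soundtrack:

    ffmpeg -r 24 -i capture.webm -i soundtrack.wav -c:v libvpx-vp9 -c:a libopus merged.webm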
JCPalmer Posted July 7, 2018 Author

I came so close to getting a completely successful test of Canvas.captureStream on Firefox. Whether on Chrome or Firefox, the VR rig worked fine. In either case though, you cannot specify a codec. Firefox puts out VP8, but Chrome does not even put out a true WEBM file; it has an MP4 codec. The killer is you cannot set the size of the capture in code. It is whatever the physical size of the canvas is on the screen. It makes sense, but that is a problem which cannot really be overlooked.

Am going to stick with the toDataUrl() method, and table the VR rig for now, unless someone knows how to size a physical canvas (probably need to create the canvas in code). I have a 30" high-res display (2560 x 1600), so I could not do UHD (3840 x 2160). Do not know if that is a real problem or just imagined.

Code I use to size the canvas:

    // make videos of an exact size, regardless if it looks weird on screen
    function sizeSurface(width, height) {
        const canvas = engine.getRenderingCanvas();
        canvas.width = width;
        canvas.height = height;
        // may not have auto resize; if it does, no harm doing it again
        engine.setSize(width, height);
    }
Guest Posted July 9, 2018

Interesting study!! Can't remember if you tried the ScreenRecorder from WebRTC: https://www.webrtc-experiment.com/Pluginfree-Screen-Sharing/#5834900709824411
JCPalmer Posted July 10, 2018 Author

Thanks, but no, I had not. I can say that I am all about control, & that looks to have none (no frame rate, no quality, no resolution). I just completed using toDataUrl(). It is the only one in which you can control quality. In the 1.6 sec clip below, the combined size of the .webm + .wav files is a MASSIVE 8,858 kb. That is a lot for so small a clip, but when the multi-pass VP9 codec convert & soundtrack merge is done by ffmpeg, it is only 277 kb. As I am merging the consolidated soundtrack afterward anyway, giving ffmpeg the most crisp frames as a source to encode as VP9 or H264 is very desirable. It takes a lot of RAM, but I have 16 gb & room for 16 more.

The annotations in the cropped black bars were supposed to be just a joke, but it is really helpful to bake settings right into the video during dev. You can easily mix up your files without knowing. Am now starting to work on a clip with actual talking; work on the recording code is done, unless I find something.

Thinking about it, the alpha for VR is probably in the cameras, not the background. Going to throw VR under the bus. Actually, YouTube can show 360 videos. Not going to attempt this right now, but I wonder about having a rig with say 300 cameras & viewports. The VR distortion on the combined output is probably wrong for this, though.

side-by-side-vp9.webm
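For the curious, the multi-pass VP9 convert & soundtrack merge described above could look something like this (a sketch with placeholder file names and bitrate, not JCPalmer's actual command):

    # pass 1: analyze video only, no audio, discard output
    ffmpeg -i raw.webm -c:v libvpx-vp9 -b:v 1M -pass 1 -an -f null /dev/null
    # pass 2: encode using the pass-1 log and mux in the soundtrack
    ffmpeg -i raw.webm -i soundtrack.wav -c:v libvpx-vp9 -b:v 1M -pass 2 -c:a libopus final.webm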