re_evolutn Posted November 6, 2016

Hi there, I'm in the middle of creating a website that works with multiple control inputs, including a virtual cursor for VR using VRDeviceOrientationFreeCamera. The method I'm currently using is a bit of a hack, I think: I've created a long cylinder and attached it to the camera, which I use to check for collisions and then fire the same functions I have in OnPointerOverTrigger, OnPointerOutTrigger, etc. Is this the best solution, or is there a way to simulate a mouse pointer that is attached to a camera?

Cheers
Wingnut Posted November 8, 2016

Hiya @re_evolutn, welcome to the forum! I tried a playground search for "crosshair" and found this playground. It uses a plane for a crosshair, and it seems to have mouseOver working, using a picking ray. Perhaps you can use this method. Hope this helps, and again, welcome.
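The picking-ray approach above boils down to: each frame, cast a ray from the camera through the crosshair, see which mesh it hits, and fire over/out handlers when the hit mesh changes. The Babylon.js call (`scene.pick` in the render loop) is shown only in comments; the over/out bookkeeping itself is plain JavaScript. A minimal sketch — the helper name and callback shape are my own, not from the thread:

```javascript
// Tracks which mesh the virtual cursor is over, firing over/out
// callbacks only when the hovered target changes between frames.
function createHoverTracker(onPointerOver, onPointerOut) {
  let current = null; // target hovered last frame, null if none
  return function update(picked) {
    if (picked === current) return;           // no change: fire nothing
    if (current !== null) onPointerOut(current);
    if (picked !== null) onPointerOver(picked);
    current = picked;
  };
}

// In a Babylon.js scene you would feed it from the render loop, e.g.:
//   const update = createHoverTracker(onOverHandler, onOutHandler);
//   scene.registerBeforeRender(() => {
//     const pick = scene.pick(canvas.width / 2, canvas.height / 2);
//     update(pick.hit ? pick.pickedMesh : null);
//   });
```

This replaces the collision-cylinder hack: the ray is infinitely thin, costs nothing to "attach" to the camera, and the tracker guarantees each over event is paired with exactly one out event.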
re_evolutn Posted November 8, 2016

That works perfectly, thanks Wingnut. Your search-fu is much better than mine.
Wingnut Posted November 8, 2016

My pleasure. I'd like to pass a "thanks" to the original coder of that playground, whoever he/she may be. It works great.

BJS picking rays work great, and rays are pretty good at getting distances. I wonder if they could be used as a 3D scanner of mesh? We have that skull model thing: http://playground.babylonjs.com/#LNEX4 (allow some time for the skull model to load). I wonder if "sweeping" a pickingRay across the surface of the skull is possible. As we sweep, we gather distance data into a buffer, and then use that buffer as a heightMap or displaceMap. Weird. Sorry, I wandered off, mentally, for a moment.
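The "sweeping" idea above can be sketched as a grid scan: cast one ray per cell, record the hit distance, and keep the results in a buffer shaped like a heightmap. This is only a sketch of the idea, not tested against the skull playground; `castRay` is a hypothetical stand-in for a Babylon.js call such as `scene.pickWithRay` with a downward-pointing ray, returning `pickInfo.distance` (or null on a miss):

```javascript
// Sweeps a (cols x rows) grid of downward rays over the region
// [minX, maxX] x [minZ, maxZ], converting each hit distance into a
// height and storing it row by row in a Float32Array "heightmap".
// castRay(x, z) returns the distance from the ray origin to the mesh,
// or null if the ray misses.
function sweepHeightmap(castRay, minX, minZ, maxX, maxZ, cols, rows, rayOriginY) {
  const heights = new Float32Array(cols * rows);
  for (let r = 0; r < rows; r++) {
    for (let c = 0; c < cols; c++) {
      const x = minX + (c / (cols - 1)) * (maxX - minX);
      const z = minZ + (r / (rows - 1)) * (maxZ - minZ);
      const dist = castRay(x, z);
      // "Distance from above" becomes "height above the floor";
      // cells the ray misses get height 0.
      heights[r * cols + c] = dist === null ? 0 : rayOriginY - dist;
    }
  }
  return heights;
}
```

The buffer could then be fed to a displacement step, since it has the same row-major layout a grayscale heightMap image would decode to.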
re_evolutn Posted November 10, 2016

Would you also use pickingRay to read the RGB value of the current mesh (the inside of a sphere with a texture, for example)? That could be useful to translate into something like a temperature readout on that texture.
Wingnut Posted November 10, 2016

Hi again! That's a great question/topic! http://www.babylonjs-playground.com/#11WQKN#1

Here, even though I have no texture on the "wall" (the target), line 33 is reporting pickingInfo.textureCoordinates to the console. The numbers are the fractional distance between 0,0 in the lower-left corner and 1,1 in the upper-right corner (of the no-texture texture on the wall plane). Programmers can easily convert those numbers to "percentage of up-ness" and "percentage of right-ness". I didn't even go that far; I just sent the raw coordinates to the console. (I'm lazy.)

Your question is so good that it should probably be a new forum topic, possibly titled 'Eyedropper - Get color under picked point'. If you DO start this new topic, please include 'eyedropper' and 'colorUnderPointer' as tags. I searched the forum and playgrounds and found no information about this. It is a worthy subject to discuss further, probably in a separate thread.

POSSIBLY the way to do this is to create a renderTargetTexture (RTT) from the current camera view. RTTs are like a camera view that can be used as a texture. For example, below is a playground with 4 RTTs from 4 different cameras, textured onto 4 planes that are parented to the camera (so they stay in the same place). http://www.babylonjs-playground.com/#1WROZH#6

I know only one way to get RGB data from an image/texture: paint/put the image (our RTT) into a context2D canvas, and then use the context2D's pixel-manipulation features. The getImageData function allows us to retrieve imageData.data from a (tiny) rectangular area. We would want to position that tiny rectangular area a certain distance upward and rightward into the canvas. Hey, we HAVE up-ness (Y-amount) and right-ness (X-amount) values from our pickingInfo.textureCoordinates, yes? I see hope!
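The conversion step mentioned above — turning the fractional textureCoordinates into a position inside a canvas — has one trap worth spelling out: the thread's UVs put 0,0 at the lower-left, while canvas pixel coordinates put 0,0 at the top-left, so the v value must be flipped. A small sketch (the helper name is my own):

```javascript
// Converts pickingInfo-style texture coordinates (u, v in [0, 1],
// origin at the lower-left) into integer pixel coordinates for a
// canvas of the given size. Canvas y grows downward, so v is flipped.
function uvToPixel(u, v, width, height) {
  const x = Math.min(width - 1, Math.floor(u * width));
  const y = Math.min(height - 1, Math.floor((1 - v) * height));
  return { x, y };
}
```

The Math.min clamps keep the exact-edge case u = 1 or v = 0 from indexing one pixel past the canvas.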
Note: when I mention "canvas" above and below, I do NOT mean the renderCanvas for the entire scene. I mean a separate HTML canvas where we "put" a renderTargetTexture (RTT) gotten from our primary camera.

If we CAN get imageData.data from the tiny rectangle positioned X/Y into the context2D canvas, then perhaps we will be farting through silk (pardon my Klingon dialect), and our "eyedropper" color getter could be successful. According to some document somewhere (MDN, probably), imageData.data contains "a Uint8ClampedArray representing a one-dimensional array containing color data in RGBA order, with integer values between 0 and 255 (inclusive)." Sounds like something usable, eh? What a complicated procedure, though. Erf.

Let's wait for more comments, and then perhaps start a new topic about this a bit later. Does that sound like a good plan?
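The last step — pulling an RGBA value out of that Uint8ClampedArray — is simple indexing once you know the layout the quote describes: the buffer runs row by row, 4 bytes per pixel in R, G, B, A order. A minimal sketch (the helper name is my own; the buffer layout is the standard one `getImageData` returns):

```javascript
// Reads the RGBA color at pixel (x, y) from an ImageData-style buffer:
// a flat Uint8ClampedArray of 0-255 values, row by row, 4 bytes per
// pixel in R, G, B, A order.
function rgbaAt(data, width, x, y) {
  const i = (y * width + x) * 4;
  return { r: data[i], g: data[i + 1], b: data[i + 2], a: data[i + 3] };
}

// In a browser the buffer would come from the 2D canvas holding the RTT:
//   const ctx = offscreenCanvas.getContext("2d");
//   const data = ctx.getImageData(0, 0, w, h).data;
//   const color = rgbaAt(data, w, px, py);
```

With the flipped pixel coordinates from the textureCoordinates conversion fed in as x and y, this is the whole "eyedropper": one array lookup per pick.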