Wingnut posted July 8, 2018:

Yep, well said. I hope I never implied that you were a bad person or that you had done evil. I think you are a good person. I only had/have issue with the "sound" and "feel" of that first reply. It feels "hope-killing". You know... rock in trail, idea dead, no hope for work-arounds, end of discussion, cope and deal.

You know about hippies like me. We hug everything, even the technically deluded/confused.
Wingnut posted July 22, 2018:

Hi kids. https://www.babylonjs-playground.com/#KX33X8#40

Doing a little experimenting with the grid system... HUD-like stuff. Cells surround the canvas area... fullscreen ADT. I'm making sure I have good click-thru in the center "viewing area". Checking the stretching and squishing of the controls... when I slide the PG vertical divider, or when I open my dev tools and slide its horizontal slider. Thinking about colorPickers and other controls that look best if they stay square or round... non-stretched.

I wonder what screens/gui like this look like on cellphones, tablets, and VR headsets. Buttons too small to use/read? If I had 4 numeric/text "readouts" in a single cell, are those numbers too small to read... on a cellphone? Do virtual joysticks/glove-cursors provided by VR gear or by tablet apps get pasted atop these buttons? Are joysticks/widgets even provided by the device? Some devices yes, some no? Can the device/BJS virtual-joystick area be limited to ONLY the inside of the "viewing area" of this gui layout? You know, things like that. Thinkin'. :)

Big picture: Wondering HOW WELL a desktop app... can also be a tablet/cellphone app... and also be a VR headgear app... all with the same GUI layout. Info welcome... I have only a desktop here.
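A minimal sketch of that HUD-style grid idea, assuming a fullscreen ADT: controls go in the border cells, and the center "viewing area" cell stays empty so pointer events there fall through to the scene. The cell ratios and names are illustrative, not taken from the PG.

```javascript
// HUD-style grid: controls live in the border cells; the center
// "viewing area" cell is left empty so pointer events fall through.
var adt = BABYLON.GUI.AdvancedDynamicTexture.CreateFullscreenUI("hud");

var grid = new BABYLON.GUI.Grid("hudGrid");
grid.addColumnDefinition(0.2);   // left strip
grid.addColumnDefinition(0.6);   // open viewing area
grid.addColumnDefinition(0.2);   // right strip
grid.addRowDefinition(0.15);     // top strip
grid.addRowDefinition(0.7);      // open viewing area
grid.addRowDefinition(0.15);     // bottom strip
adt.addControl(grid);

var btn = BABYLON.GUI.Button.CreateSimpleButton("b1", "BTN");
grid.addControl(btn, 0, 0);      // top-left cell; cell (1,1) stays empty
```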
brianzinn posted July 22, 2018:

Good questions. I'll be a bit evil and throw in the first wrench...

1 hour ago, Wingnut said: ... and also be a VR headgear app...

The full-screen GUI stuff - CreateFullscreenUI() - is not going to work in VR, but that doesn't mean the apps can't share the same GUI layout. The GUI texture just needs to be attached to a mesh that is part of the 3D scene.

I started my project in DOM and had to redo large portions of the GUI to bring them into the canvas... and then I had to redo the entire scene layout to have these "always available" buttons that weren't too obtrusive for my 3D level builder. So, I've struggled there. One way that works, if you have a controller, is to attach the GUI mesh to the camera. Then the meshes follow you when you look around, and you can click on the side. That falls apart on something like a Cardboard, because when you look towards the button it keeps moving.

Fun thought experiment - I would list the targets I want the GUI layout to work on, and then test each one. Look at the grid layout in Bootstrap - it uses CSS media queries to move things around. Like a hamburger menu on a mobile phone and a full menu on desktop. This you can simulate on a desktop by resizing the page. I've made UI changes that look great on desktop and then are unreadable on my S8, which has a really high pixel count for a phone.

Looking forward to seeing where this goes!
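A sketch of the two attachment modes being contrasted here, assuming the usual playground `scene`; the plane size and the 1024x1024 texture resolution are illustrative choices, not requirements.

```javascript
// The same layout-building function, attached two ways.
// Fullscreen mode: a pure 2D overlay - not usable in WebVR.
var ui2d = BABYLON.GUI.AdvancedDynamicTexture.CreateFullscreenUI("hud2d");

// Texture mode: the GUI renders onto a plane that lives in the 3D
// scene, so it also works in VR.
var guiPlane = BABYLON.MeshBuilder.CreatePlane("guiPlane", { size: 2 }, scene);
var ui3d = BABYLON.GUI.AdvancedDynamicTexture.CreateForMesh(guiPlane, 1024, 1024);

function buildLayout(adt) {
    var btn = BABYLON.GUI.Button.CreateSimpleButton("dock", "DOCK");
    btn.width = "150px";
    btn.height = "40px";
    adt.addControl(btn);
}
buildLayout(ui3d);   // or buildLayout(ui2d) on a desktop target
```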
Wingnut posted July 22, 2018:

Thx @brianzinn.

3 hours ago, brianzinn said: CreateFullscreenUI() That's not going to work in VR

Can you hand-hold me a bit further, and tell me why? Panorama issues? No pointer? I've never looked thru a Rift/Vive/PSVR/OSVR. No opportunity to do that around me... except maybe over at the tech college.

https://www.babylonjs-playground.com/#KX33X8#41

Press the goggles button. Is that what it would look like? Impossible to select a button, and we'd get brain tumors trying to get it into depth-focus? Impossible to read any numbers/labels from the controls?

Thx for all that other good info, too. The project is a spacecraft... a precision flyer and precise space-docking vehicle. It could have/use tons of buttons and readouts, but it leaves the realm of "shoot-'em-up" very quickly... due to the many hot-keys that COULD go active on a desktop keyboard. To operate a game like this... with an Oculus or Cardboard, or cellphone/tablet... would probably require no more than 8 big buttons on screen at once, and then drill, drill, drill thru menu levels... until the correct big slider was on-screen and thumb-slide-able.

In general, it seems like cells and tablets need BIG GUI... and even if the GUI virtual keyboard could be turned on... thumbs aren't small enough to operate it (on cellphones and small tablets). GUI-on-a-plane for VR headsets... that's not a big deal. But same issue... if you need 20 buttons/sliders on that spacecraft dashboard, can the "cursor" or pointer in a VR headset be accurate enough to press small buttons, or clearly read their labels? At a certain point, a guy has to think about keeping the buttons bigger, and having 4 different menu panels (each with fewer/bigger controls/readouts).

And then... we start to think about "GUI Managers" which could possibly sense what type of device is being used... and build the GUI specifically FOR that type of device. Game play can be affected by the type of device, which is something game programmers probably DON'T want. If a player needs to click the "select dashboard" button 3 times to focus the control/slider that they need to adjust, that gives the enemy/docking-port another 3-5 seconds of approach time. Device-sensing GUI. Erf. In the old days, we ONLY needed to do browser-sensing. Now we need to determine if the player is playing the game on a coffeemaker or a walkie-talkie.
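That "device-sensing GUI" can at least be roughed-in with standard browser checks before any controls are built. A hypothetical sketch; the profile names and the decision rule are invented for illustration:

```javascript
// Guess an input profile before building the GUI. "pointer: coarse"
// is a standard CSS media feature: true for most touch screens.
function guessInputProfile() {
    var coarse = window.matchMedia &&
                 window.matchMedia("(pointer: coarse)").matches;
    var touch = "ontouchstart" in window || navigator.maxTouchPoints > 0;
    // "touch" wants big buttons and drill-down menus; "desktop" can
    // use small controls and keyboard hot-keys.
    return (coarse || touch) ? "touch" : "desktop";
}

var profile = guessInputProfile();
```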
brianzinn posted July 22, 2018:

1 hour ago, Wingnut said: Can you hand-hold me a bit further, and tell me why?

It's buried in the docs and certainly had me confused in the beginning. https://doc.babylonjs.com/how_to/gui

"The fullscreen mode is not intended to be used with WebVR as it is a pure 2d rendering. For WebVR scenario you will have to use the texture mode."

That took me a while to figure out how to work around. That's why I was just saying that it's good to reduce your target devices.

For your spaceship - if you have the perspective from inside the spaceship - then you can attach the meshes/planes to the camera... especially if it's read-only gauges. If re-usable controls are created, they will be super useful to a lot of people. Here is a menu: [screenshot]

From your PGs and some other ones, it looks like we are making progress on scrollbars. I ended up abandoning Cardboard support. I may revisit - technically there is a button that clicks part of the screen.

Maybe just build your game for the desktop and then revisit? That's what I did. It ended up being less work in the end, as GUI had come much further!! It's getting better every release.
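A minimal sketch of the attach-to-camera suggestion, assuming the usual playground `scene` and `camera`; the plane dimensions and offsets are guesses to tune per cockpit:

```javascript
// Dashboard plane parented to the camera, so gauges follow the view.
var dash = BABYLON.MeshBuilder.CreatePlane("dash",
    { width: 1.5, height: 0.75 }, scene);
dash.parent = camera;                              // follows the head/camera
dash.position = new BABYLON.Vector3(0, -0.4, 2);   // low, 2 units ahead

var dashUI = BABYLON.GUI.AdvancedDynamicTexture.CreateForMesh(dash, 1024, 512);
var speed = new BABYLON.GUI.TextBlock("speed", "VEL 0.00");
dashUI.addControl(speed);   // a read-only gauge; update its .text per frame
```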
Guest posted July 23, 2018:

Please send PRs to help me with the docs, guys. It is titanic work to make it work for everyone.
brianzinn posted July 23, 2018:

3 hours ago, Deltakosh said: Please send PR to help me with the doc guys.

It's already in the docs that FullscreenUI is not for VR, so I think it's good. I was just trying to warn Wingnut that what he was up to is a very big task... and to maybe reduce the target devices to start.

On 7/22/2018 at 9:30 AM, Wingnut said: ... and also be a VR headgear app... all with the same GUI layout.
Wingnut posted July 23, 2018:

Good info again, thx!

21 hours ago, brianzinn said: if it's read-only gauges especially

*nod* Probably switches and sliders will be involved, too... and their labels as well.

I suppose there are no alt, shift, or control buttons along the edge of cellphones and tablets (touch-screen devices), so alt-touch, shift-touch, and control-touch are not "normal"... and have to be manually programmed... using three pixel-based GUI buttons, somehow within reach of left/both thumbs. Maybe these three "bucky" buttons don't appear unless both virtual joysticks are NOT being displayed. Control/alt/shift-touch would give us a few easy "bring up a menu" or "change camera/view" possibilities. (A rough sketch of one such button follows this post.)

I'm thinkin'... what if the dashboard is in 8 separate sections, thus allowing the controls to be bigger and easier to touch. Left thumb holds down a BJS GUI control button at the bottom of the screen, then keep touching the screen anywhere... with the right thumb... until the wanted dashboard section is being displayed. Once a section of dashboard is displayed, virtual joysticks (flight controls) are off, and can't turn on. Then, both thumbs work on the dashboard buttons and sliders (a different kind of virtual joystick/crosshairs/reticle activates - panel-nav mode). Thinkin' and talkin'. How complex of a gui can be controlled with two thumbs on a tablet/phone touchscreen?

And then I think about... what is a computer and what is a display device? All these things are blurring together... both literally and figuratively. I need to read. Does a Vive have its own computer? Does it need at least a cellphone? Can a tablet be used as a controller for a VR headset? Are VR headsets and hand controllers meant to go mobile? I can't keep up! I don't expect anyone to teach me "what's shakin' in the tech world" here in TWC.

Brian knows I was dreaming of "GUI modes"... easy switching from VR mode, to touchpad mode, to desktop mode... and it all works right, or well enough. Sounds like a big task, though. Probably impossible. Generally speaking, it seems desktop/mouse/keyboard can do tiny things, but VR and touch-screen mobiles need bigger controls and different ways to nav them. It's a weird time in tech history, huh? I bet the BIG MONEY is in batteries, eh?
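Those on-screen "buckies" can be roughed-in as GUI buttons that just flip flags while held. A sketch, assuming a fullscreen ADT named `adt`; the sizes and the `modifiers` object are invented for illustration:

```javascript
// One on-screen modifier ("bucky") button: held down = shift active.
var modifiers = { shift: false, control: false, alt: false };

var shiftBtn = BABYLON.GUI.Button.CreateSimpleButton("shiftBtn", "SHIFT");
shiftBtn.width = "90px";
shiftBtn.height = "90px";   // thumb-sized
shiftBtn.horizontalAlignment = BABYLON.GUI.Control.HORIZONTAL_ALIGNMENT_LEFT;
shiftBtn.verticalAlignment = BABYLON.GUI.Control.VERTICAL_ALIGNMENT_BOTTOM;
adt.addControl(shiftBtn);

shiftBtn.onPointerDownObservable.add(function () { modifiers.shift = true; });
shiftBtn.onPointerUpObservable.add(function () { modifiers.shift = false; });
// Elsewhere, a touch/drag handler checks: if (modifiers.shift) { ... }
```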
brianzinn posted July 23, 2018:

1 hour ago, Wingnut said: I can't keep up!

You are not alone. It's a fast-moving target. Some VR headsets you just pop a phone into, so those are definitely mobile (the phone screen is split, 1/2 for each eye). They have controllers (ie: Daydream, GearVR). The Oculus Go is a standalone VR headset (no phone). The higher-end ones connect to a computer and have their own screens; some need a good video card.

1 hour ago, Wingnut said: Can a tablet be used as a controller for a VR headset?

I haven't seen that done, but you could do it with WebRTC. I think lag will be an issue.

It's not like you are opening a Pandora's Box - just that there is a bit of a learning curve. It helps a lot to visualize, even with a $20 Cardboard, but quickly you will probably want something more... nothing is impossible - it just depends on how much time you want to invest! In VR you can't even touch the screen, which is why I was thinking you may just want to skip VR, at least to start.
Wingnut posted July 24, 2018:

*nod*

7 hours ago, brianzinn said: In VR you can't even touch the screen

Yeah, but they often have air-mice or finger-stylus hand controllers, right? Or always? Sorry for so many questions. When a button goes down on a VR hand controller... is it read as a touch... back at DOM-events-level/gestures-level? I guess even touch-events are bound to standard mouse-like button-down, at some point.

I hate to see apps being custom-made for the type of input controller(s) and type of output display. It handcuffs the app/game programmers to a degree. But yeah, it seems that VR is the first type to (temporarily) leave behind - probably lowest priority, as you said.

Operating an OS such as Windows... while wearing VR gear... COULD "take off" in the future, I suppose. IF it does, one would think that the big dogs will put lots of work into nicely converting the current Windows GUI... into something VR-tolerable. Maybe that happens at the video cards/GPU, maybe at DirectX, maybe both. 3D depth is a brand-new thing for GUI-makers. I suppose our new GUI 3D is an "early taste" of what that world might look like.

There's a link in the 3D GUI docs... to the git repo for the MS Mixed Reality Toolkit. (Obviously, they blatantly copied BJS ActionManagers.) Interesting stuff. Stereoscopics. Anaglyph. How lightweight can it be done? Can eye-gear be so lightweight and comfortable... that most people will start wearing them... just to use Windows on an average desktop? Perhaps ONLY for graphics programs in the early years, because there are so few places to use "depth" within the current Windows OS? Maybe MS would need to code a "depthful" version of Windows... before all-day use of stereoscopics on Windows... would even be tried. heh. So messy. The bleeding edge never has a clear publicly-published game plan.
brianzinn posted July 24, 2018:

39 minutes ago, Wingnut said: Or, always?

Basically yes. The headsets typically have a controller. The outlier is Cardboard, which ironically has a button that touches the screen... but I don't think that is set up in the Experience Helper (or the rotate-head-to-go-back), so last I checked you were relying on "gaze": look at an object for a duration to click/select it.

Yes, those button clicks with controllers are registered automatically in GUI. If you point at a GUI button with a controller, it is clicked.
Wingnut posted July 25, 2018:

Gaze?! heh. Wild. How the hell does THAT work? (Actually, I shouldn't have asked that.) Speaking of eye tracking: https://www.roadtovr.com/why-eye-tracking-is-a-game-changer-for-vr-headsets-virtual-reality/

And sound waves, too... find your finger in 3D space... via doppler! Could also watch for thunderstorms forming in your computer room!

Ok, leaving VR behind for now... we're back to cells and tablets. I saw a tablet 2 years ago... that was almost as big as a laptop screen! Someone richer-than-I was using it to get LIVE track-side performance data... transmitted from a snowmobile that was currently racing in a pro race. The guy was obviously a sponsor/owner of that snowmobile racing team. He pulled this battery-eating electronic etch-o-sketch out of his very-sponsor-decorated snowsuit, tapped the screen a few times, and the screen filled with live-updating snowmobile statistics/numbers... from the racer's sled. Too cool. Side-story... not really pertinent... but drool-worthy.

I guess one should ask... when does a two-thumbs tablet/cellphone... go beyond 2-thumb land? Can we, and should we, as GUI layout folk... programmatically CHECK... if/when a display screen is too wide for 2-thumbing (and probably uses finger-touching instead)? 2-thumbing screens are probably cellphones... but we "real men" have hands big enough to do 2-thumbing on BIG cellphones, too, and small tablets (and in the screen-size gray area where we can't determine if it is a tablet or a cellphone). What COULD be a 2-thumb device for a pro basketball player... might not AT ALL be a 2-thumber for an 8-year-old girl. But still, the amount/size of GUI controls... seems very dependent upon... whether the device is being 2-thumbed or whether it is being finger-touched.

For example: that PG... top and bottom rows are 6 buttons wide... about 16.69% of canvas width per button. Is that GUI too small for a last-generation cellphone screen being driven with 2-thumbing? Are those buttons too small for thumbs (during hot game-play)?

"Are you 2-thumbing?" "Are you finger/stylus-touch?" "Are you desktop?" Are these questions that we could/should ask players at run-time? hmm.

I guess there's such a thing as 1-thumbing, too... but I'm not sure if anyone plays webGL games using 1 thumb. Maybe webGL tools/apps, though. The device needs a small screen (actually a small 3D canvas) to cover all touch-areas... with a single thumb. But pocket size IS important. Tiny screens aren't going to die anytime soon... but... I doubt that many people will be playing BJS Space Warriors on such devices. Wait till the teen girls invent e-Lockettes... necklace/bracelet-wearable tablets... with tiny tiny screens and memory that contains all their most precious pictures, thoughts, and feelings. heh. Fun! E-Lockettes can also contain personal medical data, usable by EMTs/hospitals. Opinions/thoughts, anyone? (thx)
brianzinn posted July 25, 2018:

It does seem like phones are getting bigger rather than smaller, but there's probably a max, so we can still fit them in our pockets. It does not seem to be cool to have a tiny phone like in the 90's!! Maybe there will be a phone that folds out and doubles the screen.

I wrote a touch-screen UI and custom controls (not webgl) for people who would often be wearing gloves. There were treeview controls, giant scrollbars, etc. Everything was huge. I think in the end it comes down to pixels/devicePixelRatio and screen real estate.

Dynamic GUI generation is a very cool and re-usable concept, especially 2D <--> 3D. I wish I had time to run with you on this and make some cool PGs! I am maxed out for the next 6-7 weeks.
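One concrete handle on the pixels/devicePixelRatio point: Babylon GUI's `idealWidth` lets a layout be designed at one width and scaled to the actual canvas. A sketch with illustrative numbers:

```javascript
// Design the layout at one width; GUI scales control sizes to the
// actual canvas, which helps when a desktop layout lands on a phone.
var adt = BABYLON.GUI.AdvancedDynamicTexture.CreateFullscreenUI("hud");
adt.idealWidth = 1280;           // design-time width in "ideal" pixels
adt.renderAtIdealSize = false;   // render at native rez; only sizes scale

// devicePixelRatio can still steer the design itself:
var dense = window.devicePixelRatio >= 2;   // e.g. a high-DPI phone
var buttonHeight = dense ? "80px" : "40px"; // illustrative numbers
```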
Wingnut posted July 26, 2018:

Hey, don't overwork, now. Thx for the conversation... and LOTS of info. You cleaned me up quite a bit, BZ... I appreciate that. Maybe others will give their opinions and trend predictions, too.

I liked your glove-GUI example. Yep, that same "bigness" will be needed when we start doing tablets-for-toddlers. And touchscreen devices for drummers... those will need to be STRONG! I imagine black-ops military folk... test glove-driven GUI at times. But maybe soldiers don't carry computers... and are only sent display data/computed results.

I have a couple of "clip sport" MP3 players in my life, and I guess that's the smallest screen/gui I have seen - mediocre rez. But as far as readable fonts at tiny sizes go, I've seen some beautiful phone-pad things in the hands of my tech-consumer friends. A solid 20 icons, nicely finger-spaced, on their pad-top screens, and I could easily read the tiny text under each icon. Nice.
brianzinn posted July 26, 2018:

8 hours ago, Wingnut said: military folk... test glove-driven GUI

You are pretty spot-on there. They are designed for in-vehicle use, though. Right about soldiers too, as they were using head-mounted displays - the brightness of a computer screen can be a dangerous thing...
Wingnut posted July 29, 2018:

Yeah, brightness... you bet. Remember the cone-of-silence from the Get Smart franchise? Those soldiers need a cone-of-darkness to compute within. Interesting!

I recently did some studying on screen PPI/DPI... pixels per inch. http://dpi.lv/ is a real nice ad-free site that has a scrolling table of screens/DPIs mid-page, and it introduced me to a thing called "DPPX - Dots Per Pixel (dppx) - the amount of device pixels per CSS pixel" - and a WIKI page. Here's a testing page I found... while searching for DPPX... http://mediaqueriestest.com/ I browsed it with IE and FF... interesting numbers.

Trying to determine... when/not your BJS game/controls/gui... is 2-thumb-able, is going to be a challenge. DPI threw a new variable into the mess. This variable almost seems to aim us... at asking the user to grasp their landscape-oriented device with both hands, reach sideways with each thumb... a comfortable maximum amount... and touch the screen. Establish a "comfortable thumb-reach zone" (CTRZ). Then, we restrict our virtual joysticks to stay within the CTRZ, and when we pop up menus, we do the same... building them at locations within the CTRZ areas. This might only be important on big tablets and detach-'n'-go laptop screens... etch-o-sketch sized and bigger.

To define a CTRZ, you grab the landscape-oriented tablet with both hands, and do half-moon swipes across the touch surface with each thumb. (A rough sketch of that calibration step follows this post.) After that... we build the BJS GUI and virtual joysticks and drag-range system... for 2-thumb control. We keep it all within the CTRZ zones of screen-space. This system... sort-of allows us to pre-test-customize our games/gui for child thumbs, or basketball-player thumbs (a massive difference in both thumbprint area and CTRZ area... for those two thumb-size extremes). We measure the user's thumbs, their renderCanvas dims, and their DPI... before we build the GUI menus and drag-puck (VJ) controllers. Phew.

Most times, VJs are used for driving around in a VR scene, and we don't care how far the user's thumbs can reach. Generally, if a thumb is dragging, we're either translating or rotating the camera/player. In this case, the only adjustment needed for varying thumb-sizes... might be VJ sensitivity (sensibility). I'm more interested in... where to put buttons... such as on-screen SHIFT, CONTROL, and ALT (and maybe META) buttons. I sometimes call them "buckies"... an old Stanford Univ term. Easier to say.

For my project, idea, etc... shifted, control, and alt... touch/thumb-dragging... is useful, in theory. It allows the game to enter different modes. For example... a shifted touch/drag... means we're selecting/moving scene nodes (and not in cam-nav mode anymore). And alt-touch brings up a menu/dialog... maybe drag-position-able... and certainly choice-selectable (turning OFF nav mode and mesh mode, of course). It would be nice... to keep the shift, control, alt, and meta keys (buckies)... within thumb reach... no matter the thumb size. And there's always the possibility of letting the user drag the 4 GUI-made bucky-buttons to any place/position they wish. This would probably require that these 4 buttons have their own private GUI container/host/root... but no big deal.

hmm. Touch-dragging GUI controls. hmm. Has anyone tried that yet? ADT.update() constantly running during drag... within onPointerMove(). Phew. Gruesome. I hear that the Measurement Engineers that work at GUI.Control.Container... HATE continuous updating. heh.

Only 4 buttons on a single full-screen ADT... but... let's try dragging them around. Fun challenge. hmm. 13 cents and my left sock to the first person who drags-around the position of 4 gui buttons (full screen). Extra credit for disallowing button overlap. Smarter people than I... wouldn't drag the actual GUI control to the new location. They'd drag/drop a "proxy widget" to some screen location, and then the actual button "snaps" to that location after the widget drop. It can be done as simply as changing the CSS :cursor type to an "I'm holding something" shape... during the drag. Cheating. :)
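For what it's worth, a hypothetical sketch of that CTRZ calibration swipe, in plain DOM pointer events; the mid-screen split and the recorded fields are invented, and wiring the zones into the GUI layout is left out:

```javascript
// Record how far each thumb reaches during a calibration swipe.
var ctrz = {
    left:  { maxX: 0, minY: Infinity },          // left-thumb zone
    right: { minX: Infinity, minY: Infinity }    // right-thumb zone
};

canvas.addEventListener("pointermove", function (e) {
    var mid = canvas.clientWidth / 2;
    if (e.clientX < mid) {                        // left-thumb half
        ctrz.left.maxX = Math.max(ctrz.left.maxX, e.clientX);
        ctrz.left.minY = Math.min(ctrz.left.minY, e.clientY);
    } else {                                      // right-thumb half
        ctrz.right.minX = Math.min(ctrz.right.minX, e.clientX);
        ctrz.right.minY = Math.min(ctrz.right.minY, e.clientY);
    }
});
// After the swipes: build VJ pads and bucky buttons only inside
// the rectangles implied by ctrz.left / ctrz.right.
```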
Wingnut posted August 1, 2018:

Hi kids. https://www.babylonjs-playground.com/#KX33X8#45

A little GUI control dragging... version 1.0. Just goofy. (Gotta stay within the blue rectangle.) Drag happens in the lines 290-362 area. No proxy drag-widgets or CSS drag-grab cursors used here. Genuine, pure, Rocky Mountain GUI button.top and button.left hacking. Gruesome and fun.

One minor issue. It seems... whenever you "peek" at a control's .top or .left, it returns a string... like "150px" (even if you never set it to a string). So... button.left += diffX and button.top += diffY don't work. I think it fails because I am trying to add a float to a string. This is the reason why lines 342 and 343 are needed. They are float versions of .left and .top.

Still testing, and this situation might be normal. Perhaps the getters for control.left & control.top will always return strings... no matter what. Maybe it HAS TO. *shrug* More tests and docs-reading ahead. Maybe some TypeScript diving. Party on!
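A sketch of the work-around, matching the float-shadow approach in the PG; `parseFloat("150px")` yields 150. (Newer GUI builds appear to also expose `leftInPixels`/`topInPixels` on controls, which would avoid the strings entirely - worth checking your version.)

```javascript
// Shadow-float work-around for the string getters.
function nudge(control, diffX, diffY) {
    var left = parseFloat(control.left);   // "150px" -> 150
    var top  = parseFloat(control.top);
    control.left = (left + diffX) + "px";
    control.top  = (top + diffY) + "px";
}
// usage inside an onPointerMoveObservable handler:
// nudge(button, dx, dy);
```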
DylanD posted August 3, 2018:

On 7/31/2018 at 8:52 PM, Wingnut said: [quote of the previous post]

I'm very interested in how it has to stay in the blue box. I would like to implement this with a scrolling menu, but couldn't figure it out. What I mean is: I want to scroll a menu, and if the menu goes outside the blue box it is hidden; back in the blue box, and it is shown again. How did you do that... what line should I look at?
Wingnut posted August 3, 2018:

Hi D. If I understand you correctly, you are talking about putting a container that is big (lots of menu items)... into a container that is smaller (so that the entire menu cannot be seen... just a little "viewport window"). After you do that, you can adjust the .left and .top of the big menu container... to slide it around within the little viewport container. Keep in mind... that the ORDER in which you add controls and containers... matters. That determines which controls have higher/lower z-layering priorities.

I have a new playground to introduce... sort-of experimenting with lower-half-of-screen GUI-based virtual joysticks/thumb-drag pads - discussed earlier with @MackeyK24. Let's look. https://www.babylonjs-playground.com/#KX33X8#48

Click once in the blue and red dragging areas... to activate their dragging pucks/pads/buttons. Button-up, and then down again, and you can drag them. Numbers at console. Notice how they can drag a bit OUTSIDE-OF their drag areas... partially out of view? That movement is done with dragpuck.top and .left... and the reason they seem to "hide" when they go outside the drag area... is because of their addControl ORDER. The areas were added first, and then the pucks were added to the areas. That COULD be important... in determining WHEN something is hidden behind something else. So, really, I don't have any answers... other than... it just happened that way.

I hope I have been somewhat helpful. I'm kind-of new at this stuff, and it surprises me fairly often. Part of the fun! As we learn more, we can all add notes to the docs... little hints and secrets that we learn.

A bit more: dragging an entire container (your menu) that is full of click-active controls... could be a bummer. You might need to leave a gap between each of your menu controls... where you can go pointer-down on the menu's CONTAINER and not accidentally go button-down on one of the menu controls. Once you can get your menu container's onPointerDownObservable to trigger, then you can drag IT around (with the menu container's onPointerMoveObservable)... within the smaller viewport container. That should move the entire menu. Don't confuse the menu container (holds all menu controls)... with the "little viewport" container which the menu container is dragged-around-within. You will be dragging the menu container. The little viewport container will remain stationary.
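A rough sketch of that stationary-viewport idea, scrolled with a slider rather than dragging; it assumes a fullscreen ADT named `adt`, and all sizes (including the 450px overflow) are illustrative:

```javascript
// Small stationary viewport clipping a taller menu; a slider scrolls it.
var viewport = new BABYLON.GUI.Rectangle("viewport");
viewport.width = "200px";
viewport.height = "150px";      // only this much of the menu shows
adt.addControl(viewport);       // containers clip children by default

var menu = new BABYLON.GUI.StackPanel("menu");
menu.verticalAlignment = BABYLON.GUI.Control.VERTICAL_ALIGNMENT_TOP;
viewport.addControl(menu);
// ...add menu buttons to `menu` here (give each a fixed height)...

var scroll = new BABYLON.GUI.Slider("scroll");
scroll.minimum = 0;
scroll.maximum = 450;           // illustrative: menu height - viewport height
scroll.width = "200px";
scroll.height = "20px";
scroll.verticalAlignment = BABYLON.GUI.Control.VERTICAL_ALIGNMENT_BOTTOM;
scroll.onValueChangedObservable.add(function (value) {
    menu.top = -value + "px";   // slide the tall menu up inside the viewport
});
adt.addControl(scroll);
```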
DylanD posted August 3, 2018:

23 minutes ago, Wingnut said: [quote of the previous post]

Wow, thank you, this response is great. I think I can do it now. I don't need the drag part; I'm just going to use a slider (I think). So, just to make sure I understand: you have a panel (or something along the lines of a panel) that holds the buttons, behind the blue square. That way a button is only seen while it's behind the square?
Wingnut posted August 5, 2018:

Hi gang. Fresh left-virtual-joystick testings... for touch or mouse-drag (I hope): https://www.babylonjs-playground.com/#KX33X8#53 Click in the blue box and start dragging immediately.

A lot of observer code was removed from the "puck" and put onto the VJ's "surface" observers instead. Mainly, this is because... when a HELD pointerDown observation happens upon the VJ surface or upon a button/rectangle... no pointerMOVE observing can happen. We are sort of stuck in a pointerDOWN (which turns the puck ON). SO, making the puck turn ON and instantly start being draggable (WITHOUT first lifting the initial buttonDOWN)... is pretty difficult. Also, lines 75/76... attaching the camera and/or setting vjSurface.isPointerBlocker = true... can screw things up.

Currently, in FF, it works pretty well. You can start dragging the puck after the first pointerDown (which turns it on). Any pointerUP, after dragging or not, turns the puck off. Currently, you can't pass clicks thru the blue-border VJ surface... which worries me. Clicking of other (z-deeper) GUI or actionManager-ized mesh... thru the vjSurface... is NOT working well, yet. hmm. Allowing thru-the-surface clicking... changes the behavior/functionality of the VJ (so far). But... it might be possible/tolerable to toggle VJ surfaces/rects ON/OFF in scenes... as needed. Unfortunately, ya need a button to do that... on tablets/phones with no keyboards. C'mon phone/tablet manufacturers... I NEED a physical meta/control button... on the device case!!! I hereby make this a new world law. heh.

Good fun, all in all. Learning, learning, learning. Branches and experiments welcome. Anyone who wants to corify this (add it to the GUI controls library)... go for it, please! But first it needs clean-up and... well... there are still issues... like pointer blockers and attached-to-scene cameras. Oh yeah, and the "no pointerMove observing allowed... while stuck in a pointerDown observation" issue. *sigh* It almost feels like I need an onPointerDownAndMove observer... different/separate from the standard onPointerDown and onPointerMove.

Originally, this was all started to allow GUI buttons to be dragged around on-screen... until they are within comfortable thumb-reach. That is a slightly different application... than virtual joysticks. Buttons must stay ON after drag-into-position, and they must still work as buttons (a switch from surface pointerDown... to puck pointerDown). Erf. My brain hurts a bit, but it's a good hurt.

Update: PG #55 is a 2-VJ version. Both VJs are hard-wired to some purple-box properties, for testing. If someone would kindly test this on a touchscreen tablet/phone... and make sure both VJs work AT ALL, and work simultaneously (do some 2-thumbing)... that would be swell. thx! Maybe I need a PG that forces a full-screen canvas... at run time?
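A stripped-down sketch of the surface-owns-the-observers pattern described above; names and sizes are illustrative, and it keeps the surface centered so the coordinate math stays simple (a real left-hand VJ would anchor bottom-left and adjust the offsets):

```javascript
// Surface owns the observers; the puck is display-only.
var surface = new BABYLON.GUI.Rectangle("vjSurface");
surface.width = "40%";
surface.height = "40%";
surface.thickness = 2;          // visible border, transparent interior
adt.addControl(surface);        // default alignment: centered in the ADT

var puck = new BABYLON.GUI.Ellipse("puck");
puck.width = "60px";
puck.height = "60px";
puck.background = "grey";
puck.isVisible = false;
surface.addControl(puck);

var dragging = false;
surface.onPointerDownObservable.add(function () {
    dragging = true;
    puck.isVisible = true;
});
surface.onPointerMoveObservable.add(function (coords) {
    if (!dragging) { return; }
    // coords arrive in ADT space; because the surface is centered,
    // the ADT center doubles as the surface center:
    puck.left = (coords.x - adt.getSize().width / 2) + "px";
    puck.top  = (coords.y - adt.getSize().height / 2) + "px";
});
surface.onPointerUpObservable.add(function () {
    dragging = false;
    puck.isVisible = false;
});
```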
DylanD posted August 6, 2018:

21 hours ago, Wingnut said: [quote of the previous post]

Wow, this is super cool!
Wingnut posted August 6, 2018:

Thanks. I wish I had a touchscreen mobile tablet or phone... so I could do a 2-thumb test on the #55 PG... to see if both VJs work at the same time (or at all).

The #56 PG is also here... a 2-VJ, fullscreen-canvas, no-pointerLock version. C'mon, somebody with a touchy-tablet mobile device thing... 2-thumb test this for me and report findings, please? thanks. #57 is another version, in case 56 doesn't start full-screen. Still learning about full-screen and pointerLock... having some inconsistencies here (lines 4/5).
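A sketch of forcing a full-screen canvas plus pointer lock from a user gesture (browsers require one); these are standard DOM calls, not Babylon-specific, and older browsers may need vendor-prefixed variants that are omitted here:

```javascript
// Must be called from inside a user-gesture handler.
canvas.addEventListener("pointerdown", function goBig() {
    if (canvas.requestFullscreen) {
        canvas.requestFullscreen();
    }
    if (canvas.requestPointerLock) {
        canvas.requestPointerLock();
    }
    canvas.removeEventListener("pointerdown", goBig);
});
```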
JohnK posted August 7, 2018:

Well done, Wingy. With two thumbs I get translation and rotation at the same time on my Motorola E4 with Android 7.1.1, and on my little old tablet with Android 5.1!
DylanD posted August 7, 2018:

16 hours ago, Wingnut said: [quote of the previous post]

Tested PG #56 and could use both thumbs; however, if one of my thumbs slid into the other thumb's area, it would take over that thumb's position. But it works!