digitalicarus

Everything posted by digitalicarus

  1. If I remember correctly from experience a couple of years ago, there was a pause with many source types, and the length of the pause differed between browsers. That's why this library loads two copies of the source and sequences them based on a per-browser offset. Browsers may have fixed this since, but it was very frustrating at the time because they were all a bit different.
  2. https://github.com/Hivenfour/SeamlessLoop I used this a while back for a thrust sound in a lander game: https://github.com/digitalicarus/lander/blob/master/js/game.js#L108 https://github.com/digitalicarus/lander/blob/master/js/game.js#L275
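The two-copy trick described above can be sketched as follows. This is a minimal illustration of the idea, not SeamlessLoop's actual API; the class and method names are my own, and the scheduling logic is kept DOM-free so the browser wiring is only indicated in comments.

```javascript
// Sketch of the double-buffer looping idea: two identical copies of a
// sound alternate, and each swap is scheduled `gap` seconds early so
// the next copy starts before the browser's loop pause becomes audible.
class DoubleBufferLoop {
  constructor(duration, gap) {
    this.duration = duration; // clip length in seconds
    this.gap = gap;           // measured per-browser loop gap in seconds
    this.active = 0;          // index of the copy currently playing
  }

  // Seconds after the active copy starts at which the idle copy
  // should begin playing.
  swapDelay() {
    return this.duration - this.gap;
  }

  // Flip which copy is "active"; in a browser this is where you would
  // call audioElements[this.active].play().
  swap() {
    this.active = 1 - this.active;
    return this.active;
  }
}
```

In a real page you would create two Audio elements pointing at the same file and drive `swap()` from a timer set to `swapDelay() * 1000` milliseconds.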
  3. That's what I meant: the authors of the browsers.
  4. I don't think that canvas content rendering and overall webpage compositing are necessarily connected processes. According to this Mozilla graphics team blog, applying 3d transforms doesn't even necessarily mean there will be GPU acceleration, only that the browser will put that object on another layer, which can make compositing faster: http://mozillagfx.wordpress.com/2013/07/22/hardware-acceleration-and-compositing/ I think the CSS transform is a process that runs outside of regular canvas rendering, but we should look to the authorities (the people writing the code) or the code itself before drawing conclusions.
  5. You know, you may be right. I should be careful about giving advice that isn't completely confirmed. I may have recently fooled myself into believing this while working on lighting for my ray caster: I disabled the CSS scaling property, the frame rate was much better, and I decided right then to try the 3d versions of the transforms instead. When I switched to 3d I saw an improvement in frame rate, but this isn't a very scientific test. When I look at the W3C transforms spec, there's nothing specific about how these routines should be performed, just a prescription for the properties and the behavior. So, like other things, it's up to the browser vendors to implement acceleration for various properties, and at this point both of our assertions are conjecture. I would like to know the truth. I'm trying to comb through the WebKit source just to see what the truth is for that browser, but I'm unfamiliar with it and get lost easily. https://github.com/WebKit/webkit/tree/master/Source/WebCore/platform/graphics/transforms I'm not sure I'm interpreting this right, but it does seem that at the point where a scale operation object is instantiated, it's normalized to the same object and any missing z value is replaced with 1. https://github.com/WebKit/webkit/blob/master/Source/WebCore/platform/graphics/transforms/ScaleTransformOperation.cpp#L40 It also looks like they have abstracted the creation of this object via two prototypes, so if something in the code omits the z value it is again invoked with a default of 1. https://github.com/WebKit/webkit/blob/master/Source/WebCore/platform/graphics/transforms/ScaleTransformOperation.h#L34 So, even though I'm not 100% sure I read it correctly, I'm willing to concede in the case of WebKit. The point still stands that browsers can implement the specs with a variety of approaches. That's one down (I hope); who wants to help with Gecko and the others? Thanks for keeping me honest, rich.
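The normalization being described, per the CSS transforms spec, is that scale(sx, sy) behaves like scale3d(sx, sy, 1), with the missing z component defaulting to 1. A toy demonstration (the matrix helpers are hand-rolled for illustration, not WebKit's code):

```javascript
// Build a 4x4 scale matrix (flattened, row by row).
function scale3dMatrix(sx, sy, sz) {
  return [
    sx, 0,  0,  0,
    0,  sy, 0,  0,
    0,  0,  sz, 0,
    0,  0,  0,  1,
  ];
}

// 2D scale: the omitted z component defaults to 1, so the two forms
// produce identical matrices.
function scaleMatrix(sx, sy) {
  return scale3dMatrix(sx, sy, 1);
}
```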
  6. Be sure to try CSS 3d transforms first, then fall back. If you don't, the scaling won't be handled by the compositor (and possibly the GPU), and your game's performance can suffer significantly. Here's one way to do it: https://github.com/digitalicarus/caster/blob/master/js/shared.js#L67 Also, scaling behavior can be configured a bit in some browsers via CSS properties: https://github.com/digitalicarus/caster/blob/master/index.html#L30 In action: http://digitalicarus.com/game/caster/ Good luck!
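The "try 3d transforms first, then fall back" approach can be sketched like this. The helper name, the property-probing convention, and the zoom fallback are my assumptions (see the shared.js link above for the author's actual version); returning a {prop, value} pair keeps the decision logic testable without a DOM.

```javascript
// Decide which CSS declaration to use for scaling, preferring a 3d
// transform so the element gets its own composited (possibly
// GPU-backed) layer.
function pickScaleStyle(supportedProps, factor) {
  // supportedProps: style property names the browser understands,
  // e.g. gathered by probing document.body.style in a real page.
  if (supportedProps.includes("transform")) {
    return { prop: "transform", value: `scale3d(${factor}, ${factor}, 1)` };
  }
  if (supportedProps.includes("webkitTransform")) {
    return { prop: "webkitTransform", value: `scale3d(${factor}, ${factor}, 1)` };
  }
  // Last resort: no transform support at all.
  return { prop: "zoom", value: String(factor) };
}
```

In a page you would apply the result with `el.style[choice.prop] = choice.value`.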
  7. Your time will mostly be taken up with logic and rendering. If you want a bit of OO and some separation, I recommend a combination of AMD with RequireJS for modules, sprinkled with John Resig's simple inheritance for classes and extensibility: http://ejohn.org/blog/simple-javascript-inheritance/ I'm currently writing a JS raycaster you could look at for an example of some of these things. It's a little messy because I'm refactoring the casting from the game module into a separate cast module: https://github.com/digitalicarus/caster Another game (a Lode Runner clone) I started and didn't finish used Resig's method and modules: https://github.com/wee-enterprises/motherlode You don't have to use just one of these techniques; you can mix and match. I use basic JS OO for things that won't need to be extended (engine stuff, usually) and Resig's small OO code for things like entities. Don't concern yourself too much with these minor nuances. JSPerf is not super accurate: a difference of even 8%+ can just be circumstantial noise (run LOTS of samples). Write a lot of code that does things you actually want to do and profile it. It's great to be performance-minded, but real testing on real game code trumps small contrived tests. For instance, I recently increased the performance of my caster simply by pre-generating all my trig tables. This gained far more than the switch to typed arrays (which I also did). JS objects are quite fast. That being said, JSPerf is a lot of fun and good as a litmus test when you need quick info. Just beware the contrived nature of the tests and the fact that many other factors in the browser affect JS performance, so you'll get a lot of noise in your data.
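The trig-table optimization mentioned above can be sketched as follows: compute sin/cos for every angle once at startup so the ray-casting inner loop does a table lookup instead of calling Math.sin/Math.cos. The table size and use of Float32Array are assumptions for illustration, not the caster repo's exact code.

```javascript
const TABLE_SIZE = 3600; // tenth-of-a-degree resolution (assumption)
const SIN = new Float32Array(TABLE_SIZE);
const COS = new Float32Array(TABLE_SIZE);
for (let i = 0; i < TABLE_SIZE; i++) {
  const angle = (i / TABLE_SIZE) * 2 * Math.PI;
  SIN[i] = Math.sin(angle);
  COS[i] = Math.cos(angle);
}

// Convert an angle in radians to a table index (done once per ray,
// not once per step of the cast).
function angleToIndex(rad) {
  const i = Math.round((rad / (2 * Math.PI)) * TABLE_SIZE) % TABLE_SIZE;
  return i < 0 ? i + TABLE_SIZE : i;
}
```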