Pryme8 Posted March 1, 2016

I am trying to find more documentation on how to use the visualizer. I would like to analyze audio with an offset to the playback, because I would like to generate level features and enemy spawns from the sound file analysis. To what level can I break down the dataArray? I would effectively like to set up listeners for certain aspects of the sound, configured from a pre-scan. I see the standard docs, but is there somewhere with more explanation of the methods? http://doc.babylonjs.com/classes/2.3/Analyser
davrous Posted March 1, 2016

Hi, I've written a complete article on our audio stack: https://blogs.msdn.microsoft.com/davrous/2015/11/05/creating-fun-immersive-audio-experiences-with-web-audio/ . You'll find links to demos using the analyser there, where you'll be able to review the code. But generally speaking, Babylon.js just directly exposes the Web Audio analyser, so once you get it, you can use any sample or approach you find on the web. There are also a couple of demos created by @Stvsynrj, our analyser expert: http://synergy-development.fr/babylonyzer/ , http://www.babylonjs.com/Demos/Dancing%20CSG/ , http://www.babylonjs.com/Demos/AudioAnalyser/

For your question, I'm not sure I've perfectly understood what you'd like to achieve. You want to analyze an audio file offline to create markers based on specific filters and then generate actions from them?

Bye, David
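A minimal sketch of the real-time side, assuming a scene with a sound already playing and using the Babylon.js 2.3 Analyser class linked above (connectToAnalyser, FFT_SIZE, SMOOTHING and getByteFrequencyData come from that class; the bin index and 200 threshold are illustrative guesses, not a recommendation):

// Wire the Babylon.js analyser to the global audio engine and sample
// the frequency bins every frame.
var analyser = new BABYLON.Analyser(scene);
BABYLON.Engine.audioEngine.connectToAnalyser(analyser);
analyser.FFT_SIZE = 512;      // number of FFT samples; the returned array has FFT_SIZE / 2 bins
analyser.SMOOTHING = 0.9;     // temporal smoothing of the bins

scene.registerBeforeRender(function () {
    // Uint8Array of FFT_SIZE / 2 bins, each 0..255, low frequencies first
    var dataArray = analyser.getByteFrequencyData();
    var bass = dataArray[1];  // react to a low-frequency bin, for example
    if (bass > 200) {
        // trigger a gameplay reaction here (spawn, pulse a light, etc.)
    }
});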
Pryme8 (Author) Posted March 1, 2016

Yes David. Prior to playing the file, I want to create a buffer of the song and run it through a function that looks for certain frequency flags and also determines the sections of the song (intro, verse 1, interlude, etc.), and then builds a JavaScript object I can use to construct a series of events tied to the sound playback. I think I could achieve the offset by loading the sound into a buffer array and then offsetting the index by a certain amount (in milliseconds). I am somewhat familiar with the standard HTML5 Web Audio API, and have built standard canvas analyzers that can effectively do the flagging I need.

Are you available for projects right now? If I can get this idea off the ground it will be a profitable project, and I would be interested in taking on someone with a better background in Web Audio.
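A rough sketch of that kind of pre-scan using plain Web Audio (decodeAudioData plus a windowed RMS pass over the raw samples). The buildMarkers name, the 50 ms window, and the 0.3 loudness threshold are made up for illustration, and a real section detector (intro, verse, etc.) would need more than a simple level flag:

// Decode the whole song up front, walk the raw samples in fixed windows,
// and emit time-stamped markers that gameplay code can consume later.
var audioCtx = new (window.AudioContext || window.webkitAudioContext)();

function buildMarkers(arrayBuffer, onReady) {
    audioCtx.decodeAudioData(arrayBuffer, function (audioBuffer) {
        var samples = audioBuffer.getChannelData(0);               // mono pre-scan is enough for flags
        var windowSize = Math.floor(audioBuffer.sampleRate * 0.05); // 50 ms windows
        var markers = [];
        for (var start = 0; start < samples.length; start += windowSize) {
            var end = Math.min(start + windowSize, samples.length);
            var sum = 0;
            for (var i = start; i < end; i++) {
                sum += samples[i] * samples[i];
            }
            var rms = Math.sqrt(sum / (end - start));
            if (rms > 0.3) {                                        // arbitrary loudness flag
                markers.push({ timeMs: (start / audioBuffer.sampleRate) * 1000, level: rms });
            }
        }
        onReady(markers);  // e.g. schedule spawns relative to the sound's play() time
    });
}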