xtreemze Posted June 9, 2018
My question is about audio tied to a mesh, where volume and panning are handled by the Babylon.js engine. This works great for WAV and MP3 audio files, but lately I've wanted to use generators like Tone.js to synthesize tones in JavaScript. It seems that Babylon.js audio requires a URL to insert audio into the audio chain, but is there any way to use the Babylon.js audio engine with tones generated by Tone.js and the Web Audio API, without first rendering them to WAV or MP3?
Guest Posted June 11, 2018
pinging @davrous
JCPalmer Posted June 11, 2018
If Tone.js can produce an AudioBuffer, or you can assemble one from the parts available, then you can supply that to the BABYLON.Sound constructor in place of a URL. Otherwise things are not looking promising, though: can you produce an Audio element?
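A minimal sketch of what this could look like, assuming Tone.js can render a phrase offline into an AudioBuffer (e.g. via `Tone.Offline`) and that `BABYLON.Sound` accepts an AudioBuffer where it normally takes a URL, as suggested above. The `scene` and `mesh` variables, and the exact `Tone.Offline` / `buffer.get()` signatures, are assumptions rather than verified API:

```javascript
// Sketch only: render one second of a Tone.js synth offline, then hand
// the resulting AudioBuffer to Babylon.js for spatialized playback.
Tone.Offline(() => {
  // Hypothetical: a short synth note rendered ahead of time.
  const synth = new Tone.Synth().toMaster();
  synth.triggerAttackRelease("C4", 0.5);
}, 1).then((toneBuffer) => {
  // Assumption: toneBuffer.get() exposes the underlying AudioBuffer.
  const sound = new BABYLON.Sound("tone", toneBuffer.get(), scene, null, {
    spatialSound: true // let Babylon.js handle volume and panning
  });
  sound.attachToMesh(mesh); // tie it to a mesh, as in the original question
});
```

Note the limitation raised later in the thread: this buffer is pre-rendered, so parameters cannot be changed per frame once playback starts.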
davrous Posted June 12, 2018
I'm going to have a look at Tone.js to tell you, but at first glance it doesn't seem designed for easy integration.
xtreemze Posted June 13, 2018 Author
On 6/11/2018 at 7:39 PM, JCPalmer said: If Tone.js can produce an AudioBuffer, or you can assemble one from the parts available, then you can supply that to the BABYLON.Sound constructor in place of a URL. Otherwise things are not looking promising, though: can you produce an Audio element?
Interesting that other things can be passed into the BABYLON.Sound constructor. But in this case, if I'm not mistaken, the buffer would not be real-time sound, since it would be buffered/pre-rendered. My intention is to use variables that change every frame to drive specific synthesizer parameters, so recorded sound would not be an ideal solution. Nonetheless, there would be about 20 ms of latency to play with, so the sound might still be perceived as immediate.
12 hours ago, davrous said: I'm going to have a look at Tone.js to tell you, but at first glance it doesn't seem designed for easy integration.
Thanks Davrous! I appreciate you taking the time to have a look.
davrous Posted June 13, 2018
You're right about the delay. Ideally, I would need to take their audio graph and connect it to the input node of the BABYLON.Sound object. I've seen in their samples and docs that I can provide my own audioContext, which is good, but I haven't seen how to get their output audio node to connect it into my own audio graph. It seems we would need to modify the audio engines of both Babylon.js and Tone.js to make this work. Indeed, I don't currently offer a way to customize the audio graph like that, because I hadn't thought about such a use case. For now, both libraries create an audioContext and both connect directly to the audio destination (the speakers). To make what you'd like work, Babylon.js would need to take control of the audio destination and panner node (for spatialization) and insert Tone.js's procedural audio generation in between.
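The graph splicing described above could be sketched roughly as follows. This assumes Tone.js lets you supply a shared context (`Tone.setContext`) and re-route its master output (`Tone.Master`), and that Babylon.js exposes its context via `BABYLON.Engine.audioEngine.audioContext`; all of these, and the `scene`/`mesh` variables, are assumptions for illustration, not a working recipe (the thread concludes this isn't feasible without changes to both libraries):

```javascript
// Share one AudioContext between Babylon.js and Tone.js.
const ctx = BABYLON.Engine.audioEngine.audioContext;
Tone.setContext(ctx);

// Insert a panner between Tone.js's output and the speakers, giving
// the engine (or your own code) a node to drive for spatialization.
const panner = ctx.createPanner();
panner.connect(ctx.destination);

// Re-route Tone.js through the panner instead of straight to the speakers.
Tone.Master.disconnect();
Tone.Master.connect(panner);

// Each frame, position the panner at the mesh so panning tracks it.
scene.registerBeforeRender(() => {
  const p = mesh.getAbsolutePosition();
  panner.setPosition(p.x, p.y, p.z);
});
```

The sticking point davrous identifies is the last two re-routing steps: neither library exposed hooks for them at the time, since each built its own graph ending at the destination.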
xtreemze Posted June 13, 2018 Author
8 minutes ago, davrous said: You're right about the delay. Ideally, I would need to take their audio graph and connect it to the input node of the BABYLON.Sound object. [...] To make what you'd like work, Babylon.js would need to take control of the audio destination and panner node (for spatialization) and insert Tone.js's procedural audio generation in between.
Thanks, Davrous, for taking such a deep look into this. Since it's not feasible, I'll mark this as solved; I just wanted to see whether it could be done easily with the current APIs.