Don’t worry, it’s here! An API that will not only whiten your teeth and improve your posture, but will add sound to your website in a wonderful way*. It’s the Web Audio API! Never heard of it? Don’t worry. This tutorial will get you up and running in no time.
The Web Audio API is a high-level way of creating and manipulating sound directly in the browser via JavaScript. It allows you to either generate audio from scratch or load and manipulate any existing audio file you may have. It's extremely powerful; it even has its own timing system, so playback can be scheduled with split-second accuracy.
"Can't I just use the <audio> element?" Well, yes, but it really depends on your use case. The <audio> element is perfect for embedding and playing audio clips such as music or podcasts, but if you need a bit more control, such as programmatically adjusting the volume or adding effects, then the Web Audio API will be right up your Tin Pan Alley.
Make a Sound
Let’s dive right in. To start playing with the Web Audio API, we need to make sure we’re using a browser that supports it. Let’s check caniuse.com. Looks like browser support is pretty good—only Internet Explorer doesn’t support the API at the moment, but that will change soon, as it’s currently being implemented for inclusion in the next major release.
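If you'd like to guard against browsers that have no support at all, a simple feature check will do. Here's a minimal sketch; the alert message is just a placeholder for whatever fallback you prefer.

if (!window.AudioContext && !window.webkitAudioContext) {
    // No Web Audio API support at all, so bail out politely
    alert('Sorry, your browser doesn\'t support the Web Audio API.');
}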
Let's keep things simple by creating a basic HTML page with a <script> element and the following content.
<!doctype html>
<html>
  <head>
    <title>Web Audio API</title>
  </head>
  <body>
    <h1>Welcome to the Web Audio API</h1>
    <script>
      // Create the audio context
      var audioContext = new AudioContext();

      // If you're using Safari, you'll need to use the prefixed
      // constructor instead:
      // var audioContext = new webkitAudioContext();
    </script>
  </body>
</html>
The AudioContext is a little container where all our sound will live. It provides access to the Web Audio API, which in turn gives us access to some very powerful functions. Before we continue, however, it’s essential to understand an important concept of the Web Audio API: nodes.
Nodes
Let's take the curly-haired astrophysicist and Queen guitarist Brian May as an example. When Brian wants to play his guitar, he takes a lead from his guitar and connects it to an effect pedal, such as a distortion pedal. He then connects another lead from the distortion pedal either to another effect or to his amplifier. This allows sound to travel from his guitar, get manipulated along the way, and then come out of a speaker so people can hear his rock riffs. This is exactly how the Web Audio API works. Sound is passed from one node to the next, being manipulated as it goes, before finally being output to your speakers.
Here's a basic example. Add the following to your <script> tag.
var context = new AudioContext(),
    oscillator = context.createOscillator();

// Connect the oscillator to our speakers
oscillator.connect(context.destination);
Here we've created an oscillator. An oscillator is a type of sound generator that will provide us with a simple tone. We've taken a lead from the oscillator and connected it to our speakers, otherwise known in web audio land as context.destination.
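By default the oscillator produces a sine wave at 440Hz (a concert A). We won't need anything fancier in this tutorial, but if you fancy a different pitch or timbre, you could tweak the oscillator before starting it. For example:

// Purely illustrative tweaks: a square wave an octave lower
oscillator.type = 'square';        // 'sine', 'square', 'sawtooth' or 'triangle'
oscillator.frequency.value = 220;  // frequency in hertz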
Now that everything's connected, we just need to start the oscillator so we can hear it. Make sure your speakers aren’t turned up too loud!
// Start the oscillator now
oscillator.start(context.currentTime);
You should now hear something when your page loads. To stop your oscillator playing after a few seconds, simply add the following.
// Stop the oscillator 3 seconds from now
oscillator.stop(context.currentTime + 3);
Hear something? Well done, you've just made sound in the browser!
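To complete the Brian May picture, you could also patch an "effect pedal" in between the oscillator and the speakers. Here's a minimal sketch using a GainNode, one of the many node types the API provides, to turn the volume down before the sound reaches your speakers.

var context = new AudioContext(),
    oscillator = context.createOscillator(),
    gainNode = context.createGain();

// Guitar to pedal: send the oscillator into the gain node
oscillator.connect(gainNode);

// Pedal to amp: send the gain node to the speakers
gainNode.connect(context.destination);

// Turn the volume down to 20%
gainNode.gain.value = 0.2;

oscillator.start(context.currentTime);
oscillator.stop(context.currentTime + 3);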
Audio Files
Now, you may be thinking “Oscillators?! I don’t have time for this, I’m an important business person with lots of business meetings and business lunches to go to!”, which is perfectly ok. Making sound in this way isn’t for everyone. Luckily, there's another way.
Let's say instead you want to play an ordinary, run-of-the-mill mp3 file. The Web Audio API can do this too. First, we have to load the audio file via our old friend XMLHttpRequest. Remember that when loading files this way, your page has to be served from a web server rather than simply opened from your local filesystem. To avoid any complications, make sure your mp3 file is served in the same way and from the same location.
var request = new XMLHttpRequest();
request.open('GET', 'my.mp3', true);
request.responseType = 'arraybuffer';

request.onload = function () {
    var undecodedAudio = request.response;
};

request.send();
When the audio file has fully loaded, the onload event fires and the audio data is available in the request's response property. At this point it's stored as an ArrayBuffer, but in order to get the audio data out of it we have to convert it to an AudioBuffer. Think of an AudioBuffer as a little container that holds our audio data in memory for us. To do this we use the decodeAudioData function.
request.onload = function () {
    var undecodedAudio = request.response;

    context.decodeAudioData(undecodedAudio, function (buffer) {
        // The contents of our mp3 is now an AudioBuffer
        console.log(buffer);
    });
};
Once we've got an AudioBuffer holding our audio data, we need to find a way of playing it. You can't play an AudioBuffer directly; it needs to be loaded into a special AudioBufferSourceNode. This node is like a record player, while the buffer is the vinyl record with the music on it. Or, to bring my analogy up to date, the node is like a tape deck and the buffer is a cassette...
request.onload = function () {
    var undecodedAudio = request.response;

    context.decodeAudioData(undecodedAudio, function (buffer) {
        // Create the AudioBufferSourceNode
        var sourceBuffer = context.createBufferSource();

        // Tell the AudioBufferSourceNode to use this AudioBuffer.
        sourceBuffer.buffer = buffer;
    });
};
The record is now on the record player, ready to play. But remember, we're using the Web Audio API, and the Web Audio API requires that we link nodes together in order to send the sound to our speakers. So let's just do what we did previously with our oscillator, and connect our source node to our speakers (context.destination).
request.onload = function () {
    var undecodedAudio = request.response;

    context.decodeAudioData(undecodedAudio, function (buffer) {
        var sourceBuffer = context.createBufferSource();
        sourceBuffer.buffer = buffer;
        sourceBuffer.connect(context.destination);
    });
};
Again, now that everything is connected, we can easily start playing the contents of the mp3 by telling the AudioBufferSourceNode to play at this very moment in time.
sourceBuffer.start(context.currentTime);
Beautiful!
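One thing worth knowing before you build on this: an AudioBufferSourceNode is a one-shot object, so once it has been started it can't be started again. In tape deck terms, each deck only gets one press of the play button. If you want to replay the same mp3, the usual approach is to create a fresh source node each time and reuse the decoded AudioBuffer. Here's a rough sketch; the play helper is purely for illustration.

// Illustrative helper: create a new source node for every playback
function play(buffer) {
    var sourceBuffer = context.createBufferSource();
    sourceBuffer.buffer = buffer;
    sourceBuffer.connect(context.destination);
    sourceBuffer.start(context.currentTime);
}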
Summary
In this tutorial we've learned how to use the Web Audio API to create a sound natively within the browser, as well as how to load and play an mp3 file. The API is capable of much more, and I look forward to showing you its potential in future tutorials.
All the code from this tutorial is available on GitHub.
*The Web Audio API sadly doesn't currently support whitening teeth or improving posture.