To access your computer's microphone with your browser, you need to call the
navigator.mediaDevices.getUserMedia()
function. This takes in one argument: an object that describes what media to
access. This object can look something like this:
{
  "audio": true,
  "video": true,
}
For our purposes, we want just the audio. Our full code will be as follows:
const audioCtx = new AudioContext();
if (navigator.mediaDevices) {
  navigator.mediaDevices.getUserMedia({"audio": true}).then((stream) => {
    const microphone = audioCtx.createMediaStreamSource(stream);
    // `microphone` can now act like any other AudioNode
  }).catch((err) => {
    // browser unable to access microphone
    // (check to see if microphone is attached)
  });
} else {
  // browser unable to access media devices
  // (update your browser)
}
The initial if (navigator.mediaDevices)
check verifies that the browser
you're using supports media devices at all. If this check fails, that
probably means you need to upgrade your browser.
On the next line, the .getUserMedia()
function returns a special data type
called a "promise". For now, all you need to know about promises is that they
take care of asynchronous processing. The anonymous callback function passed
in as an argument to the .then()
method will only run once the browser has
safely accessed the computer's microphone. Conversely, the .catch()
method
will only run if the browser has failed to access the microphone, which usually
means either that you denied the page permission to use it or that no
microphone is attached to your computer.
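If the promise syntax feels unfamiliar, the same logic can also be written with JavaScript's async/await keywords. The following is just a minimal sketch of that alternative formulation; the function name setupMicrophone is made up for this example:
async function setupMicrophone() {
  try {
    // Wait for the browser to grant access to the microphone.
    const stream = await navigator.mediaDevices.getUserMedia({"audio": true});
    const microphone = audioCtx.createMediaStreamSource(stream);
    // `microphone` can now act like any other AudioNode
  } catch (err) {
    // browser unable to access microphone
  }
}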
Finally, the
audioCtx.createMediaStreamSource()
method takes the
incoming microphone data and converts it into an ordinary AudioNode
. You can
think of it as the equivalent of an adc~ object in Max. Since our incoming
microphone data is now an AudioNode
, you can easily incorporate it into an
AudioNode
network graph, letting you do things like control gain, apply
envelopes, add filters, and so on.
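For example, here is a minimal sketch, assuming the audioCtx and microphone variables from the code above, that routes the microphone through a gain node on its way to the speakers (exactly the kind of live monitoring the warning below is about):
// Control the microphone's level with a gain node before output.
const gainNode = audioCtx.createGain();
gainNode.gain.value = 0.5; // attenuate to half amplitude
microphone.connect(gainNode);
gainNode.connect(audioCtx.destination);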
A quick word of warning: since your laptop's internal microphone is so close to its speakers, it is very difficult to test live audio processing algorithms without getting horrendous feedback. A good solution is to test your code while wearing headphones. Just be careful not to hurt your ears!
Let's try making a very simple voice memo recorder. For this, we will use the
MediaStream Recording API. Once we create a stream
using the promise structure above, we
can pass it into a media recorder as follows:
// Instantiate the media recorder.
const mediaRecorder = new MediaRecorder(stream);
// Create a buffer to store the incoming data.
let chunks = [];
mediaRecorder.ondataavailable = (event) => {
  chunks.push(event.data);
};
The
MediaRecorder
is not an AudioNode
, nor does it save its incoming data automatically;
rather, you need to explicitly direct it to save its data into a buffer via its
ondataavailable
event handler. Here, the buffer is an array called chunks
, so named since
the MediaRecorder
is not actually receiving the data one sample at a time,
but rather in chunks of samples.
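By default, the browser decides when to hand these chunks over (typically only once, when recording stops). If you want the chunks delivered at regular intervals instead, the .start() method described below accepts an optional timeslice argument in milliseconds; a rough sketch:
// Ask the recorder to deliver a chunk roughly once per second,
// rather than only when recording stops.
mediaRecorder.start(1000);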
The mediaRecorder
instance can be controlled with the .start()
and
.stop()
methods. In order to do something with the recorded data, you will
need to assign an .onstop
event handler to the mediaRecorder
instance, like so:
mediaRecorder.onstop = () => {
  // A "blob" combines all the audio chunks into a single entity
  const blob = new Blob(chunks, {"type": "audio/ogg; codecs=opus"});
  chunks = []; // clear buffer
  // One of many ways to use the blob
  const audio = new Audio();
  const audioURL = window.URL.createObjectURL(blob);
  audio.src = audioURL;
};
A Blob
object
combines all the audio chunks into a single entity; you can think of it as
effectively flattening a nested array of raw binary data. Then, the
window.URL.createObjectURL()
function lets you point to that blob with a new
URL. Since we already know the blob contains audio data, we can simply point a
new Audio()
object at it, and treat it as if it were any other sound file.
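To tie the pieces together, here is a minimal sketch of how the recorder might be wired up to a page; the element IDs record, stop, and playback are assumptions made purely for this example, and the onstop handler repeats the idea from the code above:
// Assumes <button id="record">, <button id="stop">, and <audio id="playback" controls>
// exist in the page, and that mediaRecorder and chunks were set up as above.
const recordButton = document.querySelector("#record");
const stopButton = document.querySelector("#stop");
const playback = document.querySelector("#playback");

recordButton.onclick = () => mediaRecorder.start();
stopButton.onclick = () => mediaRecorder.stop();

mediaRecorder.onstop = () => {
  const blob = new Blob(chunks, {"type": "audio/ogg; codecs=opus"});
  chunks = []; // clear buffer
  // Point the <audio> element at the freshly recorded blob.
  playback.src = window.URL.createObjectURL(blob);
};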
A more fully-featured version of this voice memo application can be downloaded on the page below this video.
Download the files used in the above examples by right-clicking the links, and then selecting "Save Link As...".
Blob
MediaRecorder
MediaStreamAudioSourceNode
navigator.mediaDevices.getUserMedia()
window.URL.createObjectURL()