Mixing two audio buffers
First, we define a function that creates an AudioBufferSourceNode from a URL and an AudioContext.
async function createBufferSourceNode(url, context) {
  const bufferSourceNode = context.createBufferSource();
  // fetch the file and decode it into an AudioBuffer
  const res = await fetch(url);
  const arrayBuffer = await res.arrayBuffer();
  const audioBuffer = await context.decodeAudioData(arrayBuffer);
  bufferSourceNode.buffer = audioBuffer;
  return bufferSourceNode;
}
Next, we load two audio files and connect both source nodes to the destination.
const audioCtx = new AudioContext();
const bufferSourceNodeA = await createBufferSourceNode("./char.m4a", audioCtx);
const bufferSourceNodeB = await createBufferSourceNode("./number.m4a", audioCtx);
bufferSourceNodeA.connect(audioCtx.destination);
bufferSourceNodeB.connect(audioCtx.destination);
Finally, we play both sources by calling their start() methods. Note that browsers' autoplay policies require audio playback to begin from a user gesture, so we put these calls in a button's onclick callback.
startBtn.onclick = () => {
  bufferSourceNodeA.start();
  bufferSourceNodeB.start();
}
Now you should hear the two audio files playing together.
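One caveat: if the AudioContext was created before any user interaction, the browser may leave it in the "suspended" state. A minimal sketch of guarding against this inside the same click handler:
startBtn.onclick = async () => {
  // autoplay policies may leave the context suspended until a user gesture
  if (audioCtx.state === "suspended") {
    await audioCtx.resume();
  }
  bufferSourceNodeA.start();
  bufferSourceNodeB.start();
}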
Mixing microphone input and an audio buffer
The process is almost the same, but this time we also create a MediaStreamAudioSourceNode and connect it to the destination.
startBtn.onclick = async () => {
  const audioCtx = new AudioContext();
  const bufferSourceNode = await createBufferSourceNode("./char.m4a", audioCtx);
  bufferSourceNode.loop = true;
  // capture microphone input
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const mediaStreamSourceNode = audioCtx.createMediaStreamSource(stream);
  bufferSourceNode.connect(audioCtx.destination);
  mediaStreamSourceNode.connect(audioCtx.destination);
  bufferSourceNode.start();
}
When we run this with the speakers on, we will likely hear audio feedback, because the microphone picks up its own output.
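To reduce the feedback loop, one option is to request echo cancellation in the getUserMedia constraints (support and effectiveness vary by browser); wearing headphones avoids the problem entirely. A sketch:
const stream = await navigator.mediaDevices.getUserMedia({
  // ask the browser to apply echo cancellation to the microphone input
  audio: { echoCancellation: true },
});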
Mixing an audio element and an audio buffer
First, we add an audio tag.
<audio id="audio" controls src="./number.m4a"></audio>
Then, we create a MediaElementAudioSourceNode.
const audioPlayer = document.getElementById("audio");
startBtn.onclick = async () => {
  const audioCtx = new AudioContext();
  const bufferSourceNode = await createBufferSourceNode("./char.m4a", audioCtx);
  const audioElementNode = audioCtx.createMediaElementSource(audioPlayer);
  bufferSourceNode.connect(audioCtx.destination);
  audioElementNode.connect(audioCtx.destination);
  bufferSourceNode.start();
  audioPlayer.play();
}
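Note that once createMediaElementSource is called, the element's output is rerouted through the audio graph, so it is only audible via the nodes we connect to the destination; the element's built-in controls still handle play, pause, and seeking.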
Controlling the volume of each source
We can control the volume of each source by routing it through its own GainNode.
startBtn.onclick = async () => {
  const audioCtx = new AudioContext();
  const bufferSourceNodeA = await createBufferSourceNode("./char.m4a", audioCtx);
  const bufferSourceNodeB = await createBufferSourceNode("./number.m4a", audioCtx);
  const gainNodeA = audioCtx.createGain();
  gainNodeA.gain.value = 0.3; // attenuate source A
  const gainNodeB = audioCtx.createGain();
  gainNodeB.gain.value = 3; // amplify source B; values above 1 may clip
  bufferSourceNodeA.connect(gainNodeA);
  gainNodeA.connect(audioCtx.destination);
  bufferSourceNodeB.connect(gainNodeB);
  gainNodeB.connect(audioCtx.destination);
  bufferSourceNodeA.start();
  bufferSourceNodeB.start();
}
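Setting gain.value directly jumps instantly and can produce an audible click. For smoother changes we can schedule a ramp on the AudioParam; a minimal sketch that fades source A down over half a second (the target value and duration are arbitrary choices):
const now = audioCtx.currentTime;
// pin the current value, then ramp to 0.1 over 0.5 seconds
gainNodeA.gain.setValueAtTime(gainNodeA.gain.value, now);
gainNodeA.gain.linearRampToValueAtTime(0.1, now + 0.5);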
Outputting to a MediaStream
Now we change the output to a MediaStreamAudioDestinationNode, whose stream we can send to a remote user over WebRTC.
startBtn.onclick = async () => {
  const audioCtx = new AudioContext();
  const bufferSourceNodeA = await createBufferSourceNode("./char.m4a", audioCtx);
  const bufferSourceNodeB = await createBufferSourceNode("./number.m4a", audioCtx);
  const dest = audioCtx.createMediaStreamDestination();
  bufferSourceNodeA.connect(dest);
  bufferSourceNodeB.connect(dest);
  bufferSourceNodeA.start();
  bufferSourceNodeB.start();
  const mediaStream = dest.stream;
  const audioTrack = mediaStream.getAudioTracks()[0];
  // send this track
}
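For example, assuming an already-negotiated RTCPeerConnection named peerConnection (hypothetical here; the signaling setup is out of scope), the mixed track can be attached like this:
// peerConnection is a hypothetical, already-set-up RTCPeerConnection
peerConnection.addTrack(audioTrack, mediaStream);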