Similar to #46, but I think a better approach is to implement this as a Web Audio node. Here is what I propose:
```js
const waveformDataNode = createWaveformDataNode(audioContext, { scale: 512 }, (fullWaveform, sampleWaveform) => {
  // This callback gets two arguments:
  // Argument 1 (fullWaveform): the full waveform for every sample processed so far
  // Argument 2 (sampleWaveform): the waveform for the current sample only
});

anyWebAudioNode.connect(waveformDataNode);
waveformDataNode.connect(anyOtherWebAudioNode);

// API call to start gathering waveform data as it happens; passthrough otherwise
waveformDataNode.beginRender();
```
A couple of thoughts:
- By implementing this as a Web Audio node (ScriptProcessorNode or possibly AudioWorklet), we can generate a waveform from any audio data that can be processed via the Web Audio API. That means we can get waveform data from attached gain nodes and microphone streams. (A rough implementation sketch follows this list.)
- This approach works with either a regular `AudioContext` or an `OfflineAudioContext`. (See the usage sketch after the list.)
- My initial implementation was just a "container" Waveform that held an array of other Waveforms. This may or may not be the best approach, but I don't have much experience (or many use cases) with zooming in and out. Some help on this front would be much appreciated, @chrisn.
- There are some tasks that need to be done before this can be production-ready, most notably: Remove InlineWorker from closure in getAudioDecoder #61. We simply cannot spin up a new Worker instance for every sample; it's very inefficient and has caused my browser to spin many times.
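For concreteness, here is a minimal sketch of how such a node could be built on top of a ScriptProcessorNode (an AudioWorklet version would move the min/max loop into a worklet processor). This isn't a finished implementation: it reports plain min/max arrays rather than proper WaveformData objects, and everything other than the standard Web Audio calls is just the API proposed above.

```js
// Minimal sketch, not the final implementation. Uses a (deprecated but widely
// supported) ScriptProcessorNode; an AudioWorklet version would move the
// min/max loop into a worklet processor. For simplicity it reports plain
// { min, max } arrays instead of WaveformData objects.
function createWaveformDataNode(audioContext, { scale = 512 } = {}, callback) {
  const node = audioContext.createScriptProcessor(4096, 1, 1);

  const fullMin = [];
  const fullMax = [];
  let min = Infinity;
  let max = -Infinity;
  let count = 0;
  let rendering = false;

  node.onaudioprocess = (event) => {
    const input = event.inputBuffer.getChannelData(0);

    // Pass the audio through unchanged so the node can sit in the middle
    // of an existing graph.
    event.outputBuffer.getChannelData(0).set(input);

    if (!rendering) {
      return;
    }

    const blockMin = [];
    const blockMax = [];

    for (let i = 0; i < input.length; i++) {
      min = Math.min(min, input[i]);
      max = Math.max(max, input[i]);
      count++;

      // Every `scale` input samples produce one min/max waveform point.
      if (count === scale) {
        blockMin.push(min);
        blockMax.push(max);
        min = Infinity;
        max = -Infinity;
        count = 0;
      }
    }

    fullMin.push(...blockMin);
    fullMax.push(...blockMax);

    // Full waveform so far, plus the points produced by this audio block only.
    callback(
      { min: fullMin, max: fullMax },
      { min: blockMin, max: blockMax }
    );
  };

  node.beginRender = () => {
    rendering = true;
  };

  return node;
}
```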
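And a rough usage sketch with an `OfflineAudioContext`, assuming `audioBuffer` is an already-decoded AudioBuffer and `createWaveformDataNode` is the factory sketched above:

```js
// Rough usage sketch with an OfflineAudioContext. `audioBuffer` is assumed
// to be an already-decoded AudioBuffer.
const offlineContext = new OfflineAudioContext(
  audioBuffer.numberOfChannels,
  audioBuffer.length,
  audioBuffer.sampleRate
);

const source = offlineContext.createBufferSource();
source.buffer = audioBuffer;

const waveformDataNode = createWaveformDataNode(
  offlineContext,
  { scale: 512 },
  (fullWaveform, sampleWaveform) => {
    // fullWaveform grows as rendering progresses.
  }
);

source.connect(waveformDataNode);
waveformDataNode.connect(offlineContext.destination);

waveformDataNode.beginRender();
source.start();

offlineContext.startRendering().then(() => {
  // Rendering finished; the last callback invocation held the full waveform.
});
```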