Enable Native External Audio Processing via TurboModules #469


Open
wants to merge 10 commits into main

Conversation

jerryseigle

This PR introduces a clean and modular system for injecting native audio processing logic into react-native-audio-api through the use of ExternalAudioProcessor and a global registry.

Developers can now:
• Register and unregister external processors from a TurboModule at runtime
• Write custom DSP (digital signal processing) code in C++
• Perform advanced buffer-level audio manipulation directly within the native render cycle
• Avoid any reliance on the JS thread — all processing runs fully native

This allows integration of virtually any kind of audio processing, whether:
• Custom-written code tailored to your app’s needs
• Third-party DSP libraries (e.g., for pitch/tempo manipulation, watermarking, audio detection, etc.)

All without modifying the core AudioNode logic — keeping the system clean, flexible, and decoupled.

✅ Checklist
• Added ExternalAudioProcessor interface
• Implemented singleton processor registry
• Injected processing safely into AudioNode::processAudio
• Provided TurboModule demo for runtime control (volume reducer)
• Documented and isolated logic for easy integration
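The first two checklist items can be sketched roughly as follows. This is a hedged illustration, not the PR's actual code: the method names (`process`, `registerProcessor`, `unregisterProcessor`, `current`) and signatures are assumptions based on the description above.

```cpp
#include <cstddef>
#include <memory>
#include <mutex>

// Assumed shape of the external processor interface: one callback invoked
// from the native render cycle with the raw audio buffer.
class ExternalAudioProcessor {
 public:
  virtual ~ExternalAudioProcessor() = default;
  virtual void process(float *buffer, std::size_t frames, int channels) = 0;
};

// Assumed shape of the singleton registry that a TurboModule would use to
// register or unregister a processor at runtime.
class ExternalAudioProcessorRegistry {
 public:
  static ExternalAudioProcessorRegistry &instance() {
    static ExternalAudioProcessorRegistry registry;
    return registry;
  }

  void registerProcessor(std::shared_ptr<ExternalAudioProcessor> p) {
    std::lock_guard<std::mutex> lock(mutex_);
    processor_ = std::move(p);
  }

  void unregisterProcessor() {
    std::lock_guard<std::mutex> lock(mutex_);
    processor_.reset();
  }

  std::shared_ptr<ExternalAudioProcessor> current() {
    std::lock_guard<std::mutex> lock(mutex_);
    return processor_;
  }

 private:
  ExternalAudioProcessorRegistry() = default;
  std::shared_ptr<ExternalAudioProcessor> processor_;
  std::mutex mutex_;
};
```

The mutex guards against the TurboModule (JS thread) swapping the processor while the audio render thread reads it; handing out a `shared_ptr` keeps a just-unregistered processor alive until the current render callback finishes with it.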

Introduced ExternalAudioProcessor and ExternalAudioProcessorRegistry to enable modular, native-side, buffer-level audio processing. This design allows developers to register or unregister custom DSP logic (e.g., third-party DSP libraries, custom DSP, volume reduction, etc.) directly from a TurboModule, without modifying AudioNode internals or routing audio through the JS layer. All processing occurs natively in C++ for optimal performance. This structure keeps the core engine untouched while offering flexible runtime control for external processors.
Updated AudioNode::processAudio to optionally route raw buffer data to an external processor, if one is registered. This enables native buffer-level DSP (e.g., gating, EQ, third-party DSP, or anything not offered directly by react-native-audio-api) without modifying internal engine structures. The design supports full runtime control from TurboModules while preserving core stability.

All audio processing remains on the native side and bypasses JS execution for performance.
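A minimal, self-contained sketch of what the hook described above could look like. The stand-in types and the free function `processAudioWithHook` are illustrative assumptions; in the PR itself, the equivalent logic sits inside AudioNode::processAudio and uses the real registry.

```cpp
#include <cstddef>
#include <memory>

// Stand-in processor interface so the sketch compiles on its own; the PR's
// real ExternalAudioProcessor is richer than this.
struct ExternalAudioProcessor {
  virtual ~ExternalAudioProcessor() = default;
  virtual void process(float *buffer, std::size_t frames, int channels) = 0;
};

// Stand-in for the registry's single processor slot.
inline std::shared_ptr<ExternalAudioProcessor> &externalProcessorSlot() {
  static std::shared_ptr<ExternalAudioProcessor> slot;
  return slot;
}

// Hypothetical shape of the hook: after the node has rendered into `buffer`,
// hand the raw buffer to the registered external processor, if any. All of
// this runs on the audio render thread; JS is never involved.
void processAudioWithHook(float *buffer, std::size_t frames, int channels) {
  // ... the node's normal rendering into `buffer` would happen here ...
  if (auto processor = externalProcessorSlot()) {
    processor->process(buffer, frames, channels);
  }
}
```

When no processor is registered, the `if` is a cheap null check, so the hook adds essentially no overhead to the unmodified render path.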
@jerryseigle
Author

I’m not sure where to include a working example, so I’ve attached a few sample files here. If approved, I’ll also add proper documentation. You can test the implementation using these files, and there’s also a demo video available to showcase it in action.
shared.zip

DemoVideo.mp4

@michalsek
Member

Hey, not sure where to respond, so will write everything here :)

Yes, each node is a separate AudioNode, which means that in order to add such an external processing node and be able to hook it up anywhere in the graph, I think we have to iterate on it a bit :)

Instead of modifying the AudioNode directly, I would go for a new separate node that implements this, e.g. CustomAudioNode; it would inherit from AudioNode and override the processAudio or processNode method.
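A rough C++ sketch of that suggestion. The minimal AudioNode base and its method signature here are assumptions; a simple gain stands in for arbitrary external DSP.

```cpp
#include <cstddef>

// Minimal stand-in for the real AudioNode base class; the actual class in
// react-native-audio-api has a much richer interface.
class AudioNode {
 public:
  virtual ~AudioNode() = default;
  virtual void processAudio(float *buffer, std::size_t frames) = 0;
};

// The suggested approach: a separate CustomAudioNode that inherits from
// AudioNode and overrides processAudio with the external DSP, leaving the
// base class untouched.
class CustomAudioNode : public AudioNode {
 public:
  void setGain(float gain) { gain_ = gain; }

  void processAudio(float *buffer, std::size_t frames) override {
    // Example DSP: per-sample gain, standing in for any buffer-level
    // processing (third-party libraries, watermarking, detection, ...).
    for (std::size_t i = 0; i < frames; ++i) {
      buffer[i] *= gain_;
    }
  }

 private:
  float gain_ = 1.0f;
};
```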

This could then be exposed to JS/RN as a new node type that we can connect to, e.g.:

const absn1 = audioContext.createBufferSource();
const absn2 = audioContext.createBufferSource();
const externalProcessor = audioContext.createCustomProcessor();

absn1.connect(externalProcessor);
absn2.connect(externalProcessor);

externalProcessor.connect(audioContext.destination);

Regarding your question about timing, if we have:

absn1.start(now + 0.01);
absn2.start(now + 0.01);

both nodes will start at exactly the same time (or at the exact sample frame, to be precise), but as I stated above, they do not share the same AudioNode instance. It is a bit more complex, and I will be happy to explain how the pull graph works within the Audio API or Web Audio, if you're interested :)

But for this case, it might not be necessary.

But overall, great job! 🔥
I'd imagined that external C++ integrations would be much harder to implement :)

@jerryseigle
Author

jerryseigle commented May 28, 2025

> Instead of modifying the AudioNode directly, I would go for a new separate node that implements this, e.g. CustomAudioNode; it would inherit from AudioNode and override the processAudio or processNode method. [...] But overall, great job! 🔥

Just so I understand: am I to model it on the GainNode and place it in the effects directory?

@michalsek
Member

> Just so I understand: am I to model it on the GainNode and place it in the effects directory?

Yes, exactly, if you are willing to wait, I will prep all the required interfaces for you over the weekend :)

@jerryseigle
Author

> Yes, exactly, if you are willing to wait, I will prep all the required interfaces for you over the weekend :)

Yes, I can wait if you prep the required interfaces. Thanks!

@jerryseigle
Author

> Yes, exactly, if you are willing to wait, I will prep all the required interfaces for you over the weekend :)

Hey! Hope you had a good weekend 🙂 Just checking in to see if you had a chance to put together the interfaces. No rush at all—just excited to keep moving forward when you’re ready. Thanks again!

@michalsek
Member

Hey, hey,

unfortunately I haven't had a chance to look at it; I'll figure something out during the week :)

Removed externalCustomProcessor utility and reverted changes to AudioNode as suggested by the maintainer. Moving forward with implementing a proper Custom Node approach as recommended.
@michalsek
Member

Hey, hey, how is it going with the PR? :)

@jerryseigle
Author

> Hey, hey, how is it going with the PR? :)

Hey, it’s progressing well. I anticipate completing it by Tuesday. I’m currently testing and making a few adjustments.

@jerryseigle
Author

jerryseigle commented Jun 8, 2025

@michalsek Everything is complete and ready for your review. Let me know if any changes are needed.

For testing purposes, I’ve included a zip file containing my Turbo module and the index.ts file.
Archive.zip

demo-2.mp4

@vikalp-mightybyte

@jerryseigle
Thanks for this awesome feature - you added just at the right time when I needed it.

If you don't mind, could you please share the complete codebase as a .zip file? I'm also using Expo (I see in your demo video that you use Expo), and I'm unable to configure the project for Turbo Modules.

Super thanks. ✨

@jerryseigle
Author

> If you don't mind, could you please share the complete codebase as a .zip file?

@vikalp-mightybyte The zip file is too large to upload. Yes, I am using Expo. You will need an Expo development build, not Expo Go. You also need the ios and android folders; if you need them, you can run npx expo run:ios --device (or the same command with android). Then follow the steps on the React Native site for setting up a pure C++ Turbo Module. After that, you can just copy the contents of the shared.zip I uploaded above.

@vikalp-mightybyte

> Then follow the steps on the React Native site for setting up a pure C++ Turbo Module. After that, you can just copy the contents of the shared.zip I uploaded above.

That part I understood, but adding #include <audioapi/core/effects/CustomProcessorNode.h> isn't working for me; the compiler can't find the header. Any suggestions? (I've already spent quite a few hours on this with ChatGPT)

@jerryseigle
Author

@michalsek PR is complete. Added support for CustomProcessorNode via TurboModule. Ready for review.
