Enable Native External Audio Processing via TurboModules #469
Conversation
Introduced ExternalAudioProcessor and ExternalAudioProcessorRegistry to enable modular, native-side, buffer-level audio processing. This design lets developers register or unregister custom DSP logic (e.g., third-party DSP libraries, custom effects, or volume reduction) directly from a TurboModule, without modifying AudioNode internals or routing audio through the JS layer. All processing occurs natively in C++ for optimal performance. This structure keeps the core engine untouched while offering flexible runtime control over external processors.
Updated AudioNode::processAudio to optionally route raw buffer data to an external processor, if one is registered. This enables native buffer-level DSP (e.g., gating, EQ, or third-party DSP features that may not be offered directly by react-native-audio-api) without modifying internal engine structures. The design supports full runtime control from TurboModules while preserving core stability. All audio processing remains on the native side and bypasses JS execution for performance.
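A minimal sketch of what this registry pattern could look like. The names and signatures below are assumptions inferred from the description above, not the actual PR code:

```cpp
#include <cassert>
#include <cstddef>
#include <memory>
#include <mutex>

// Hypothetical interface: an external processor receives the raw
// sample buffer during the native render cycle.
class ExternalAudioProcessor {
 public:
  virtual ~ExternalAudioProcessor() = default;
  virtual void process(float* data, size_t frameCount, int channelCount) = 0;
};

// Global singleton registry; a TurboModule can swap processors at
// runtime without touching the audio graph itself.
class ExternalAudioProcessorRegistry {
 public:
  static ExternalAudioProcessorRegistry& instance() {
    static ExternalAudioProcessorRegistry registry;
    return registry;
  }

  void registerProcessor(std::shared_ptr<ExternalAudioProcessor> processor) {
    std::lock_guard<std::mutex> lock(mutex_);
    processor_ = std::move(processor);
  }

  void unregisterProcessor() {
    std::lock_guard<std::mutex> lock(mutex_);
    processor_.reset();
  }

  // No-op when nothing is registered, so the core engine is unaffected.
  void processIfRegistered(float* data, size_t frameCount, int channelCount) {
    std::shared_ptr<ExternalAudioProcessor> current;
    {
      std::lock_guard<std::mutex> lock(mutex_);
      current = processor_;
    }
    if (current) current->process(data, frameCount, channelCount);
  }

 private:
  std::mutex mutex_;
  std::shared_ptr<ExternalAudioProcessor> processor_;
};

// Example processor: halves the volume of every sample.
struct HalfVolumeProcessor : ExternalAudioProcessor {
  void process(float* data, size_t frameCount, int channelCount) override {
    for (size_t i = 0; i < frameCount * static_cast<size_t>(channelCount); ++i) {
      data[i] *= 0.5f;
    }
  }
};
```

The shared_ptr copy under the lock keeps the render thread from racing with a TurboModule unregistering the processor mid-buffer.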
I’m not sure where to include a working example, so I’ve attached a few sample files here. If approved, I’ll also add proper documentation. You can test the implementation using these files, and there’s also a demo video available to showcase it in action. (Attachment: DemoVideo.mp4)
Hey, not sure where to respond, so I'll write everything here :) Yes, each node is a separate AudioNode, which means that in order to have such an external processing node that can hook up anywhere in the graph, I think we have to iterate on it a bit :) Instead of modifying AudioNode directly, I would go for a new, separate node that implements this, which could then be exposed to JS/RN as a new node type that we can connect to, e.g.:

```js
const absn1 = audioContext.createBufferSource();
const absn2 = audioContext.createBufferSource();
const externalProcessor = audioContext.createCustomProcessor();

absn1.connect(externalProcessor);
absn2.connect(externalProcessor);
externalProcessor.connect(audioContext.destination);
```

Regarding your question about timing, if we have:

```js
absn1.start(now + 0.01);
absn2.start(now + 0.01);
```

both nodes will start at exactly the same time (or at the exact same sample frame, to be precise), but as I stated above, they do not share the same AudioNode instance. It is a bit more complex, and I will be happy to explain how the pull graph works within the audio API or Web Audio if you're interested :) But for this case, it might not be necessary. Overall, great job! 🔥
Just so I understand: am I to model it similar to GainNode and place it in the effects directory?
Yes, exactly. If you are willing to wait, I will prep all the required interfaces for you over the weekend :)
Yes, I can wait if you prep the required interfaces. Thanks!
Hey! Hope you had a good weekend 🙂 Just checking in to see if you had a chance to put together the interfaces. No rush at all, just excited to keep moving forward when you’re ready. Thanks again!
Hey, hey, unfortunately I haven't had a chance to look at it; I'll figure something out during the week :)
Removed externalCustomProcessor utility and reverted changes to AudioNode as suggested by the maintainer. Moving forward with implementing a proper Custom Node approach as recommended.
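Following the maintainer's suggestion, a custom node modeled on GainNode might take roughly this shape. The AudioNode/AudioBus stand-ins below are simplified placeholders for illustration, not the library's real internals:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Simplified stand-ins for the graph types; the real AudioNode and
// AudioBus in react-native-audio-api differ from these.
struct AudioBus {
  std::vector<float> samples;  // mono buffer for simplicity
};

class AudioNode {
 public:
  virtual ~AudioNode() = default;
  virtual void processNode(AudioBus& bus, size_t framesToProcess) = 0;
};

// A node modeled on GainNode: it sits anywhere in the graph and hands
// each rendered buffer to a user-supplied processing callback.
class CustomProcessorNode : public AudioNode {
 public:
  using ProcessFn = void (*)(float* data, size_t frames);

  void setProcessor(ProcessFn fn) { fn_ = fn; }

  void processNode(AudioBus& bus, size_t framesToProcess) override {
    if (fn_ != nullptr) {
      fn_(bus.samples.data(), framesToProcess);
    }
  }

 private:
  ProcessFn fn_ = nullptr;
};
```

Because the node owns only a callback, the external DSP stays decoupled from the engine, and the node can be connected between any sources and the destination like any other effect node.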
Hey, hey, how is it going with the PR? :)
Hey, it’s progressing well. I anticipate completing it by Tuesday. I’m currently testing and making a few adjustments.
@michalsek Everything is complete and ready for your review. Let me know if any changes are needed. For testing purposes, I’ve included a zip file containing my TurboModule and the index.ts file. (Attachment: demo-2.mp4)
@jerryseigle If you don't mind, could you please share the complete codebase .zip file? Super thanks. ✨
@vikalp-mightybyte The zip file is too large to upload. Yes, I am using Expo. You will need Expo Development, not Expo Go. You also need the iOS and Android folders. If you need those folders, you can run this command
That part I understood, but for me adding
@michalsek PR is complete. Added support for CustomProcessorNode via TurboModule. Ready for review. |
This PR introduces a clean and modular system for injecting native audio processing logic into react-native-audio-api through the use of ExternalAudioProcessor and a global registry.
Developers can now:
• Register and unregister external processors from a TurboModule at runtime
• Write custom DSP (digital signal processing) code in C++
• Perform advanced buffer-level audio manipulation directly within the native render cycle
• Avoid any reliance on the JS thread — all processing runs fully native
This allows integration of virtually any kind of audio processing, whether:
• Custom-written code tailored to your app’s needs
• Third-party DSP libraries (e.g., for pitch/tempo manipulation, watermarking, audio detection, etc.)
All without modifying the core AudioNode logic — keeping the system clean, flexible, and decoupled.
✅ Checklist
• Added ExternalAudioProcessor interface
• Implemented singleton processor registry
• Injected processing safely into AudioNode::processAudio
• Provided TurboModule demo for runtime control (volume reducer)
• Documented and isolated logic for easy integration
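As an illustration of the kind of buffer-level DSP the volume-reducer demo performs, a simple gain pass might look like this. This is a hypothetical sketch, not the PR's actual implementation:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>

// Scales every sample by a gain factor clamped to [0, 1], so the
// processor can only attenuate the signal, never amplify it.
void reduceVolume(float* data, size_t sampleCount, float gain) {
  const float g = std::clamp(gain, 0.0f, 1.0f);  // requires C++17
  for (size_t i = 0; i < sampleCount; ++i) {
    data[i] *= g;
  }
}
```

A TurboModule exposing a `setVolume(gain)` method could call into a processor wrapping this loop, giving JS runtime control while the per-sample work stays entirely on the native render thread.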