
Enable Native External Audio Processing via TurboModules #469


Merged
merged 11 commits into software-mansion:main on Jun 18, 2025

Conversation

jerryseigle
Contributor

This PR introduces a clean and modular system for injecting native audio processing logic into react-native-audio-api through the use of ExternalAudioProcessor and a global registry.

Developers can now:
• Register and unregister external processors from a TurboModule at runtime
• Write custom DSP (digital signal processing) code in C++
• Perform advanced buffer-level audio manipulation directly within the native render cycle
• Avoid any reliance on the JS thread — all processing runs fully native

This allows integration of virtually any kind of audio processing, whether:
• Custom-written code tailored to your app’s needs
• Third-party DSP libraries (e.g., for pitch/tempo manipulation, watermarking, audio detection, etc.)

All without modifying the core AudioNode logic — keeping the system clean, flexible, and decoupled.

✅ Checklist
• Added ExternalAudioProcessor interface
• Implemented singleton processor registry
• Injected processing safely into AudioNode::processAudio
• Provided TurboModule demo for runtime control (volume reducer)
• Documented and isolated logic for easy integration
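For reference, a minimal sketch of the intended shape of the processor interface and registry (method names and signatures below are illustrative, not the exact code in this PR):

// Illustrative sketch only; see the PR diff for the actual definitions.
#include <cstddef>
#include <memory>
#include <mutex>

class ExternalAudioProcessor {
 public:
  virtual ~ExternalAudioProcessor() = default;
  // Called from the native render cycle with one buffer of audio samples.
  virtual void process(float *buffer, size_t framesToProcess, int channelCount) = 0;
};

class ExternalAudioProcessorRegistry {
 public:
  static ExternalAudioProcessorRegistry &instance() {
    static ExternalAudioProcessorRegistry registry;
    return registry;
  }

  void registerProcessor(std::shared_ptr<ExternalAudioProcessor> processor) {
    std::lock_guard<std::mutex> lock(mutex_);
    processor_ = std::move(processor);
  }

  void unregisterProcessor() {
    std::lock_guard<std::mutex> lock(mutex_);
    processor_.reset();
  }

  std::shared_ptr<ExternalAudioProcessor> processor() {
    std::lock_guard<std::mutex> lock(mutex_);
    return processor_;
  }

 private:
  std::shared_ptr<ExternalAudioProcessor> processor_;
  std::mutex mutex_;
};

A production implementation would avoid locking on the audio render thread (for example by swapping the processor atomically), but the sketch shows the overall structure.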

Introduced ExternalAudioProcessor and ExternalAudioProcessorRegistry to enable modular, native-side, buffer-level audio processing. This design allows developers to register or unregister custom DSP logic (e.g., third-party DSP libraries, custom DSP, volume reduction, etc.) directly from a TurboModule, without modifying AudioNode internals or routing audio through the JS layer. All processing occurs natively in C++ for optimal performance. This structure keeps the core engine untouched while offering flexible runtime control for external processors.
Updated AudioNode::processAudio to optionally route raw buffer data to an external processor, if one is registered. This enables native buffer-level DSP (e.g., gating, EQ, third-party DSP, or features that may not be offered directly by react-native-audio-api) without modifying internal engine structures. The design supports full runtime control from TurboModules while preserving core stability.

All audio processing remains on the native side and bypasses JS execution for performance.
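To make the integration point concrete, here is a rough sketch of how such a hook could sit at the end of the render path, building on the registry sketch above (the real AudioNode::processAudio signature in the library may differ):

// Illustrative only; the actual AudioNode::processAudio signature may differ.
void processRenderQuantum(float *buffer, size_t framesToProcess, int channelCount) {
  // ... the node's normal processing fills `buffer` first ...

  // If an external processor is registered, hand it the rendered buffer;
  // otherwise the buffer passes through untouched.
  if (auto processor = ExternalAudioProcessorRegistry::instance().processor()) {
    processor->process(buffer, framesToProcess, channelCount);
  }
}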
@jerryseigle
Contributor Author

I’m not sure where to include a working example, so I’ve attached a few sample files here. If approved, I’ll also add proper documentation. You can test the implementation using these files, and there’s also a demo video available to showcase it in action.
shared.zip

DemoVideo.mp4

@michalsek
Member

Hey, not sure where to respond, so will write everything here :)

Yes, each node is a separate AudioNode, which means that in order to have such an external processing node and be able to hook it up anywhere in the graph, I think we have to iterate on this a bit :)

Instead of modifying the AudioNode directly, I would go for a new, separate node that implements this, e.g. CustomAudioNode; it would inherit from AudioNode and override the processAudio or processNode method.

It could then be exposed to JS/RN as a new node type that we can connect to, e.g.:

const absn1 = audioContext.createBufferSource();
const absn2 = audioContext.createBufferSource();
const externalProcessor = audioContext.createCustomProcessor();

absn1.connect(externalProcessor);
absn2.connect(externalProcessor);

externalProcessor.connect(audioContext.destination);
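And on the native side, very roughly something like this (names and the exact processNode arguments here are only illustrative; they would follow whatever AudioNode actually defines):

// Rough idea only; real types and signatures come from AudioNode / AudioBus.
class CustomAudioNode : public AudioNode {
 public:
  using AudioNode::AudioNode;

 protected:
  // Override the processing hook instead of modifying AudioNode itself,
  // and call out to whatever external DSP was registered from the TurboModule.
  void processNode(AudioBus *processingBus, int framesToProcess) override {
    // external buffer-level processing of processingBus goes here
  }
};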

Regarding your question about timing, if we have:

absn1.start(now + 0.01);
absn2.start(now + 0.01);

both nodes will start at exactly the same time (or at the exact same sample frame, to be precise), but as I stated above, they do not share the same AudioNode instance. It is a bit more complex, and I will be happy to explain how the pull-graph works within the audio api or web audio if you're interested :)

But for this case, it might not be necessary.

But overall, great job! 🔥
I'd imagined that external C++ integrations would be much harder to implement :)

@jerryseigle
Contributor Author

jerryseigle commented May 28, 2025

Just so I understand: am I to model it after the GainNode and place it in the effects directory?

@michalsek
Member

Just so I understand: am I to model it after the GainNode and place it in the effects directory?

Yes, exactly, if you are willing to wait, I will prep all the required interfaces for you over the weekend :)

@jerryseigle
Contributor Author

Just so I understand: am I to model it after the GainNode and place it in the effects directory?

Yes, exactly, if you are willing to wait, I will prep all the required interfaces for you over the weekend :)

Yes, I can wait if you prep the required interfaces. Thanks!

@jerryseigle
Contributor Author

Hey! Hope you had a good weekend 🙂 Just checking in to see if you had a chance to put together the interfaces. No rush at all—just excited to keep moving forward when you’re ready. Thanks again!

@michalsek
Member

Hey, hey,

unfortunately I haven't had a chance to look at it yet; I'll figure something out during the week :)

Removed externalCustomProcessor utility and reverted changes to AudioNode as suggested by the maintainer. Moving forward with implementing a proper Custom Node approach as recommended.
Removed externalCustomProcessor utility and reverted changes to AudioNode as suggested by the maintainer. Moving forward with implementing a proper Custom Node approach as recommended.
@michalsek
Member

Hey, hey, how is it going with the PR? :)

@jerryseigle
Contributor Author

Hey, hey, how is it going with the PR? :)

Hey, it’s progressing well. I anticipate completing it by Tuesday. I’m currently testing and making a few adjustments.

@jerryseigle
Contributor Author

jerryseigle commented Jun 8, 2025

@michalsek Everything is complete and ready for your review. Let me know if any changes are needed.

For testing purposes, I’ve included a zip file containing my Turbo module and the index.ts file.
Archive.zip

demo-2.mp4

@vikalp-mightybyte

@jerryseigle
Thanks for this awesome feature - you added it at just the right time, when I needed it.

If you don't mind, could you please share the complete codebase as a .zip file?
I'm also using Expo (I see in your demo video that you use Expo) and I'm unable to configure the project for Turbo Modules.

Super thanks. ✨

@jerryseigle
Contributor Author

@vikalp-mightybyte the zip file is too large to upload. Yes, I am using Expo. You will need an Expo development build, not Expo Go. You also need the iOS and Android folders; if you don't have them, you can generate them by running npx expo run:ios --device (and the same command with android for Android). Then follow the steps on the React Native site for setting up a Turbo Module in pure C++. After that, you can just copy the contents of the shared.zip I uploaded above.

@vikalp-mightybyte

That part I understood, but for me adding #include <audioapi/core/effects/CustomProcessorNode.h> isn't working; the build can't find the header.
Any suggestions? (I've already spent quite a few hours with ChatGPT.)

@jerryseigle
Contributor Author

@michalsek PR is complete. Added support for CustomProcessorNode via TurboModule. Ready for review.

Contributor

@Copilot Copilot AI left a comment


Pull Request Overview

This PR adds a new CustomProcessorNode to enable native audio processing via TurboModules, including registry management, JS/TS bindings, and a C++ implementation for real-time DSP.

  • Defines ProcessorMode and UUID in TS, plus a new ICustomProcessorNode interface
  • Implements CustomProcessorNode in JS/TS and integrates it into BaseAudioContext
  • Provides a full C++ implementation with factory/handler registries and host-object bindings

Reviewed Changes

Copilot reviewed 11 out of 11 changed files in this pull request and generated 4 comments.

Summary per file:
packages/react-native-audio-api/src/types.ts: Added ProcessorMode and UUID type aliases
packages/react-native-audio-api/src/interfaces.ts: Introduced ICustomProcessorNode and createCustomProcessor
packages/react-native-audio-api/src/core/CustomProcessorNode.ts: JS wrapper for the custom processor node
packages/react-native-audio-api/src/core/BaseAudioContext.ts: Exposed createCustomProcessor() in the JS context
packages/react-native-audio-api/src/api.ts: Exported CustomProcessorNode from the public API
common/cpp/audioapi/core/effects/CustomProcessorNode.h: Declared the native CustomProcessorNode and processor interface
common/cpp/audioapi/core/effects/CustomProcessorNode.cpp: Implemented processing logic, factories, and registries
common/cpp/audioapi/core/BaseAudioContext.h/.cpp: Added the native factory method and node registration
common/cpp/audioapi/HostObjects/CustomProcessorNodeHostObject.h: Exposed JS host bindings for CustomProcessorNode
common/cpp/audioapi/HostObjects/BaseAudioContextHostObject.h: Hooked up createCustomProcessor in the host object
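For orientation, a purely hypothetical sketch of how a handler registry keyed by UUID could be shaped; the names below (ProcessorHandler, registerHandler, the ProcessorMode values) are illustrative and not the actual API introduced by this PR, so refer to the files above for the real definitions.

// Hypothetical sketch only; the real factories and registries live in CustomProcessorNode.cpp.
#include <functional>
#include <string>
#include <unordered_map>

using UUID = std::string;
enum class ProcessorMode { ProcessThrough, Custom };  // illustrative values only

// A handler receives one render quantum of audio data.
using ProcessorHandler =
    std::function<void(float *data, int framesToProcess, int channelCount)>;

class CustomProcessorHandlerRegistry {
 public:
  void registerHandler(const UUID &id, ProcessorHandler handler) {
    handlers_[id] = std::move(handler);
  }

  void unregisterHandler(const UUID &id) { handlers_.erase(id); }

  void invoke(const UUID &id, float *data, int frames, int channels) const {
    auto it = handlers_.find(id);
    if (it != handlers_.end()) {
      it->second(data, frames, channels);
    }
  }

 private:
  std::unordered_map<UUID, ProcessorHandler> handlers_;
};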

@michalsek michalsek added the development (Develop some feature or integrate it with sth) and high priority labels Jun 13, 2025
Collaborator

@maciejmakowski2003 maciejmakowski2003 left a comment


Let's merge!

@maciejmakowski2003 maciejmakowski2003 merged commit fb32708 into software-mansion:main Jun 18, 2025
5 of 6 checks passed
@vikalp-mightybyte

@michalsek @maciejmakowski2003 @jerryseigle
Can we please also update the documentation to show how this works?
And when is this expected to be released?

@michalsek
Member

@vikalp-mightybyte definitely will add it to docs! :)

@maciejmakowski2003 maciejmakowski2003 added the feature (New feature) label Jun 23, 2025
maciejmakowski2003 added a commit that referenced this pull request Jun 25, 2025
* chore: released 0.6.1 (#485)

Co-authored-by: Maciej Makowski <maciej.makowski2608@gmail.com>

* feat: add haptics support for iOS audio sessions

* revert changes to audio-manager.mdx

* refactor: change haptics configuration from iosOptions to dedicated iosAllowHaptics field

* fix: remove explicit base class destructor calls in AudioRecorder subclasses

* Merge pull request #489 from software-mansion/michalsek-patch-1

Update README.md

* Update README.md (#490)

* feat: implemented decoding pcm in base64 (#486)

Co-authored-by: Maciej Makowski <maciej.makowski2608@gmail.com>

* Cant wait to do that - PAUSE in RN-Audio-API (#491)

* feat: implemented pause on AudioBufferQueueSourceNode

* ci: yarn format

---------

Co-authored-by: Maciej Makowski <maciej.makowski2608@gmail.com>

* fix: fixed stop (#492)

Co-authored-by: Maciej Makowski <maciej.makowski2608@gmail.com>

* fix: fixed spec alignment on android (#493)

Co-authored-by: Maciej Makowski <maciej.makowski2608@gmail.com>

* feat: position event in audio buffer

* fix: refactor param processing

* fix: additional conditions to if that handled looping

* fix: typo

* feat: added parameter to steer skipping track if setting loopStart

* fix: format

* docs: integrated few-line pages into their main objects (#501)

* docs: added docs for audio recorder (#502)

Co-authored-by: Maciej Makowski <120780663+maciejmakowski2003@users.noreply.github.com>

* Fix/android/lock screen info (#503)

* refactor: refactored AudioFile example

* fix: fixed setting artwork

* fix: fixed ABQSN onPositionChange event type

---------

Co-authored-by: Maciej Makowski <maciej.makowski2608@gmail.com>

* fix: fixed order in ctor (#508)

Co-authored-by: Maciej Makowski <maciej.makowski2608@gmail.com>

* Refactor/audio buffer base source node (#504)

* refactor: added AudioBufferBaseSourceNode class as a base for all sources playing AudioBuffers

* ci: yarn format

* fix: fixed range of bpm in drums

* refactor: aligned onended with web

* refactor: updated web onended event type

* chore: updated PR template

* refactor: further refactoring

* ci: yarn format

* refactor: further refactoring

* refactor: removed bufferId arg from enqueueBuffer

---------

Co-authored-by: Maciej Makowski <maciej.makowski2608@gmail.com>

* Enable Native External Audio Processing via TurboModules (#469)

* Add support for external audio processors

Introduced ExternalAudioProcessor and ExternalAudioProcessorRegistry to enable modular, native-side buffer-level audio processing. This design allows developers to register or unregister custom DSP logic (e.g., 3rd party dsp libraries or custom dsp, volume reduction, etc) directly from a TurboModule, without modifying AudioNode internals or routing audio through the JS layer.  All processing occurs natively in C++ for optimal performance. This structure keeps the core engine untouched while offering flexible runtime control for external processors.

* Integrate external audio processor into AudioNode

Updated AudioNode::processAudio to optionally route raw buffer data to an external processor, if one is registered. This enables native buffer-level DSP (e.g., gating, eq, 3rd party DSP, things that may not be offer directly with react-native-audio-api) without modifying internal engine structures. The design supports full runtime control from TurboModules while preserving core stability.

All audio processing remains on the native side and bypasses JS execution for performance.

* Revert file to match upstream

* Cleanup: Removed incorrect utility and AudioNode edits

Removed externalCustomProcessor utility and reverted changes to AudioNode as suggested by the maintainer. Moving forward with implementing a proper Custom Node approach as recommended.

* Cleanup: Removed incorrect utility and AudioNode edits

Removed externalCustomProcessor utility and reverted changes to AudioNode as suggested by the maintainer. Moving forward with implementing a proper Custom Node approach as recommended.

* Feature: Add support for CustomProcessorNode

* Feature: Add support for CustomProcessorNode

* Feature: Add support for CustomProcessorNode

* Feature: Add support for CustomProcessorNode

* Feature: Add CustomProcessorNode; fix identifier, enum mode, and ProcessThrough memory

* fix: iOS restart engine after AVAudioSessionMediaServicesWereResetNotification (#513)

* fix: fixed lock screen skip commands for android above 12 (#514)

* fix: fixed lock screen skip commands for android above 12

* ci: yarn format

---------

Co-authored-by: Maciej Makowski <maciej.makowski2608@gmail.com>

* fix: defining memory pressure in host object (#515)

* fix: defining memory pressure in host object

* fix: cammel case

* chore: released 0.6.2 (#516)

Co-authored-by: Maciej Makowski <maciej.makowski2608@gmail.com>

* feat/fix: possibility of onended and positionChange events removal (#519)

* feat: possibility of onended and positionChange events removal

* fix: misplacement of sending events

* feat/fix: update to react native 0.80.0 and fix android (#511)

* feat: update to react native 0.80.0

* fix: use ResourceDrawableIdHelper as object

* refactor: use deprecated  property for backward compat

---------

Co-authored-by: Maciej Makowski <maciej.makowski2608@gmail.com>

* docs: updated compatibility table and web audio mapi coverage (#521)

Co-authored-by: Maciej Makowski <maciej.makowski2608@gmail.com>

---------

Co-authored-by: Maciej Makowski <maciej.makowski2608@gmail.com>
Co-authored-by: Viktor Shen <viktorshenofficial@gmail.com>
Co-authored-by: Michał Sęk <michal.sek@swmansion.com>
Co-authored-by: michal <dydmichal@gmail.com>
Co-authored-by: Michał Dydek <54865962+mdydek@users.noreply.github.com>
Co-authored-by: jerryseigle <mr.jerryseigle@gmail.com>
Co-authored-by: Max Potemkin <ptmknm@gmail.com>
Co-authored-by: Rami Elwan <ramielwan48@gmail.com>