Talking Avatar is an interactive, animated avatar that responds in real time by overlaying SVG viseme images on a static image, synced with audio generated via the Azure Speech TTS service. The responses are powered by OpenAI, making the avatar both responsive and intelligent. By subscribing to the viseme event from Azure Speech, the avatar's mouth movements are coordinated with speech, creating a natural talking effect.
- Real-time animated avatar: Syncs avatar mouth movements with speech using viseme events.
- Speech generation: Powered by Azure Speech Text-to-Speech (TTS) service.
- AI-driven responses: Utilizes OpenAI for generating interactive and dynamic conversations.
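The viseme sync above can be sketched as a lookup from Azure's viseme IDs to mouth-shape images. Azure Speech reports visemes as IDs 0–21 (0 is silence, 21 covers p/b/m), and the event's `audioOffset` is given in 100-nanosecond ticks. The SVG file names below are illustrative, not the ones used in this repository:

```javascript
// Hypothetical mapping from an Azure viseme ID to an SVG mouth shape;
// file names are illustrative, not from this repo.
const VISEME_SVGS = {
  0: "silence.svg", // viseme 0: silence / closed mouth
  21: "pbm.svg",    // viseme 21: p, b, m
  // ...entries for the remaining IDs 1-20 would go here
};

function visemeToSvg(visemeId) {
  // Fall back to the closed/silent mouth for any unmapped ID
  return VISEME_SVGS[visemeId] ?? VISEME_SVGS[0];
}

// The SDK delivers audioOffset in 100-nanosecond ticks; convert to
// milliseconds to schedule when the overlay image should be swapped.
function ticksToMs(audioOffset) {
  return audioOffset / 10000;
}
```

In the browser, a `visemeReceived` handler on the Speech SDK's synthesizer would call `visemeToSvg(e.visemeId)` and schedule the overlay swap at `ticksToMs(e.audioOffset)` into the audio.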
Before you begin, ensure you have the following:
- Node.js installed on your system
- Azure Speech API keys
- OpenAI API key
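The Azure Speech key and region authenticate the backend's TTS requests, and the voice is selected per request via SSML. A minimal sketch of building that SSML; the helper and the default voice name are illustrative, not necessarily what this project uses:

```javascript
// Escape user text for safe embedding in SSML (an XML document)
function escapeXml(text) {
  return text
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;");
}

// Build a minimal SSML document for Azure TTS. The voice name here is
// an illustrative example; in this project it would come from the
// AZURE_VOICE_NAME environment variable.
function buildSsml(text, voiceName = "en-US-JennyNeural") {
  return (
    `<speak version='1.0' xml:lang='en-US'>` +
    `<voice name='${voiceName}'>${escapeXml(text)}</voice>` +
    `</speak>`
  );
}
```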
```sh
git clone https://github.com/AmoMTL/interactive-talking-avatar-vue-js-azure-speech-visemes
```

Navigate to the backend directory:

```sh
cd backend
```

Install the required dependencies:

```sh
npm install
```

Create a `.env` file in the backend directory and add your API keys:
```
OPENAI_API_KEY=<your-openai-api-key>
AZURE_SPEECH_KEY=<your-azure-speech-api-key>
AZURE_SPEECH_REGION=<your-azure-region>
AZURE_VOICE_NAME=<azure-voice-name>
```

Run the backend server:
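If any of these keys is missing, the server would otherwise fail only at the first OpenAI or Azure call. A fail-fast startup check the server could run, assuming it reads the variables above from `process.env` (the helper is illustrative, not from this repo):

```javascript
// Names match the .env file above
const REQUIRED_VARS = [
  "OPENAI_API_KEY",
  "AZURE_SPEECH_KEY",
  "AZURE_SPEECH_REGION",
  "AZURE_VOICE_NAME",
];

// Return the names of any required variables that are unset or empty
function missingVars(env) {
  return REQUIRED_VARS.filter((name) => !env[name]);
}

const missing = missingVars(process.env);
if (missing.length > 0) {
  console.error(`Missing environment variables: ${missing.join(", ")}`);
}
```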
```sh
node server.js
```

Navigate back to the root directory, then to the frontend folder:
```sh
cd ..
cd frontend
```

Install the frontend dependencies:
```sh
npm install
```

Start the frontend:
```sh
npm run dev
```

Feel free to submit issues and pull requests if you have any ideas or improvements. Contributions are always welcome!
This project is licensed under the MIT License.