Voice input for Claude Code using listen CLI tool with local Whisper transcription.
- Voice input instead of typing
- Multi-language support (es, en, fr, de, it, pt, zh, ja, ko)
- Multiple Whisper models (tiny to large)
- MCP server integration with slash commands
- Claude Code
- listen CLI
- Node.js v18+
- Working microphone
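As a quick sanity check before installing, the Node.js requirement above can be verified from a shell. This is a sketch that assumes the usual `v18.x.y` output format of `node --version`; the hard-coded value is a stand-in so the snippet runs anywhere:

```shell
# Check a Node.js version string against the v18+ requirement.
# Substitute the hard-coded value with: version=$(node --version)
version="v18.19.0"
major=${version#v}       # strip the leading "v"
major=${major%%.*}       # keep only the major component
if [ "$major" -ge 18 ]; then
  echo "Node.js OK"
else
  echo "Node.js too old: $version"
fi
```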
git clone https://github.com/gmoqa/listen.git
cd listen
pip install -r requirements.txt
# Ensure 'listen' is available in PATH

Open Claude Code and run:
/plugin marketplace add gmoqa/listen-claude-code
/plugin install claude-listen@gmoqa/listen-claude-code
That's it! The plugin will automatically:
- Install MCP server dependencies
- Configure the listen voice tool
- Add the /listen command
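If the listen binary is installed outside your PATH, the troubleshooting section mentions a LISTEN_PATH setting in .mcp.json. A hypothetical entry might look like the following; the server name, command, and args are assumptions, so check the config the plugin actually generates:

```json
{
  "mcpServers": {
    "claude-listen": {
      "command": "node",
      "args": ["server/index.js"],
      "env": { "LISTEN_PATH": "/usr/local/bin/listen" }
    }
  }
}
```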
/listen
Starts voice recording. Press Ctrl+C when done speaking. Claude processes the transcription automatically.
Claude can also use the listen_voice tool directly when needed.
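Under the hood, MCP tool invocations are JSON-RPC requests. A call to the listen_voice tool would look roughly like this; the empty argument object is an assumption, since the tool's actual schema may accept options such as language or model:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "listen_voice",
    "arguments": {}
  }
}
```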
- User runs /listen
- MCP tool calls the listen CLI
- User speaks, then presses Ctrl+C
- Whisper transcribes audio to text
- Claude processes the text as a normal request
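The same capture-then-process flow can be sketched in a plain shell. Here echo stands in for the listen CLI so the snippet runs anywhere; swap in the real invocation, whose exact flags this README does not document:

```shell
# Stand-in pipeline: capture a transcription into a variable.
# Replace `echo "hola mundo"` with the real call, e.g. transcript=$(listen)
transcript=$(echo "hola mundo")
printf 'Transcribed: %s\n' "$transcript"
```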
cd test
./quick-test.sh # Fast validation
./auto-test.sh  # Full test suite

"listen command not found"
Add listen to PATH or set LISTEN_PATH in .mcp.json
"No module named 'whisper'"
pip install -r requirements.txt in listen directory
Microphone not working
Check permissions in System Preferences → Privacy → Microphone
MIT - See LICENSE file
- listen by @gmoqa
- Whisper by OpenAI
- Claude Code MCP integration