An Obsidian plugin that allows you to chat with a wide variety of AI models using OpenRouter.
- Access to dozens of AI models including Claude, GPT, Gemini, Mistral, Llama, and more
- Free model support with clearly marked free options
- Model filtering and search by name or provider
- Web search capability for real-time information (supported by some models)
- Simple chat interface in the Obsidian sidebar
- Insert AI responses directly at the cursor position in your notes (see the sketch after this list)
- Track response metrics for performance analysis
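For a rough idea of how insertion at the cursor can work, the sketch below uses Obsidian's `Editor` API; the function name is illustrative and the plugin's actual code may differ.

```ts
import { Editor } from "obsidian";

// Illustrative sketch only: insert a response at the current cursor position.
// `insertAtCursor` is a hypothetical helper, not the plugin's actual function.
function insertAtCursor(editor: Editor, responseText: string): void {
  editor.replaceRange(responseText, editor.getCursor());
}
```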
- Open Obsidian Settings
- Go to "Community plugins" and disable "Safe mode"
- Click "Browse" and search for "OpenRouter Chat"
- Install the plugin and enable it
- Download the latest release from the releases page
- Extract the ZIP file into your Obsidian vault's `.obsidian/plugins/` directory
- Enable the plugin in Obsidian settings under "Community plugins"
- Get an API key from OpenRouter
- Open plugin settings and enter your API key
- Your API key stays on your device and is only used to authenticate with OpenRouter
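Under the hood, requests to OpenRouter are authenticated with a Bearer token, roughly as in the sketch below. The endpoint is OpenRouter's documented chat completions API; the function name and error handling are illustrative, not the plugin's actual code.

```ts
// Illustrative sketch of an authenticated OpenRouter request.
// The API key is sent only to openrouter.ai in the Authorization header.
async function sendChat(apiKey: string, model: string, prompt: string): Promise<string> {
  const res = await fetch("https://openrouter.ai/api/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model,
      messages: [{ role: "user", content: prompt }],
    }),
  });
  if (!res.ok) throw new Error(`OpenRouter error: ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content;
}
```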
- Choose your default model in the settings (Gemini 2.0 Flash is set as the default free option)
- Use the dropdown in the chat interface to switch between models for each conversation
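As a rough illustration, the stored settings might look like the sketch below; the field names and default model id are assumptions, not the plugin's actual code.

```ts
// Hypothetical shape of the plugin settings (illustrative only).
interface OpenRouterChatSettings {
  apiKey: string;
  defaultModel: string;
}

const DEFAULT_SETTINGS: OpenRouterChatSettings = {
  apiKey: "",
  defaultModel: "google/gemini-2.0-flash-exp:free", // assumed id for the default free model
};
```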
- Browse models grouped by provider (OpenAI, Anthropic, Google, etc.)
- Use the search box to find models by name or provider
- Toggle "Free models only" to see only free options
- Click the refresh button to update the model list from OpenRouter
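Refreshing and filtering the list can be done against OpenRouter's public `/models` endpoint, roughly as sketched below; the interface and function names are illustrative and may not match the plugin's internals.

```ts
// Illustrative sketch: fetch the model catalogue and optionally keep only
// free models (price "0" for both prompt and completion tokens).
interface OpenRouterModel {
  id: string;   // e.g. "anthropic/claude-3.5-sonnet"
  name: string;
  pricing: { prompt: string; completion: string };
}

async function fetchModels(freeOnly: boolean): Promise<OpenRouterModel[]> {
  const res = await fetch("https://openrouter.ai/api/v1/models");
  const { data } = (await res.json()) as { data: OpenRouterModel[] };
  return freeOnly
    ? data.filter((m) => m.pricing.prompt === "0" && m.pricing.completion === "0")
    : data;
}
```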
- Click the chat icon in the ribbon or use the command "Open OpenRouter Chat"
- Type your message in the input box
- Press Enter to send (or Shift+Enter for a new line)
- View the AI's response in the chat window
- Clear the chat with the "Clear Chat" button to start fresh
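The Enter / Shift+Enter behaviour is standard key handling on the input box, roughly as in the sketch below; the names are illustrative, not the plugin's actual code.

```ts
// Illustrative sketch: Enter sends, Shift+Enter inserts a newline.
function wireInput(inputEl: HTMLTextAreaElement, send: () => void): void {
  inputEl.addEventListener("keydown", (evt: KeyboardEvent) => {
    if (evt.key === "Enter" && !evt.shiftKey) {
      evt.preventDefault(); // suppress the newline and send instead
      send();
    }
    // Shift+Enter falls through and inserts a newline as usual
  });
}
```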
- Each assistant message includes a copy button (📋)
- Click the button to copy the entire message to your clipboard
- A checkmark (✓) will briefly appear to confirm the copy
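Copying likely relies on the standard Clipboard API, roughly as sketched below; the helper name is hypothetical.

```ts
// Illustrative sketch: copy the message text, then briefly show a checkmark.
function wireCopyButton(buttonEl: HTMLButtonElement, getText: () => string): void {
  buttonEl.addEventListener("click", async () => {
    await navigator.clipboard.writeText(getText());
    buttonEl.textContent = "✓";
    setTimeout(() => (buttonEl.textContent = "📋"), 1500);
  });
}
```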
- Toggle the web search button (🌐) to enable real-time information lookup
- Note: This feature is only supported by some models and may incur additional costs
- When enabled, the AI can search the web to provide up-to-date information
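OpenRouter documents a `:online` suffix on model ids as a way to enable web search; whether this plugin uses that exact mechanism is an assumption, but the idea is roughly:

```ts
// Assumption: web search is enabled via OpenRouter's ":online" model suffix.
// The plugin's actual mechanism may differ.
function resolveModelId(modelId: string, webSearchEnabled: boolean): string {
  return webSearchEnabled && !modelId.endsWith(":online")
    ? `${modelId}:online`
    : modelId;
}
```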
- Each response includes performance metrics
- View time to first token, total time, and token count
- Click the metrics button (📊) to see detailed information
- Use these metrics to compare performance between different models
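Time to first token and total time can be measured around a streaming response, roughly as sketched below; token counts would typically come from the API response's usage data. All names here are illustrative, not the plugin's actual code.

```ts
// Illustrative sketch: read a streamed response and record when the first
// chunk arrives (time to first token) and when the stream finishes (total time).
async function streamWithTimings(
  res: Response
): Promise<{ text: string; ttftMs: number; totalMs: number }> {
  const start = performance.now();
  let firstChunkAt: number | null = null;
  let text = "";
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    if (firstChunkAt === null) firstChunkAt = performance.now();
    text += decoder.decode(value, { stream: true });
  }
  const end = performance.now();
  return { text, ttftMs: (firstChunkAt ?? end) - start, totalMs: end - start };
}
```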
The default model is set to Google Gemini 2.0 Flash (free), which offers a good balance of performance and accessibility for all users.
- Enter: Send message
- Shift+Enter: Add new line in the input
- Up Arrow: View response metrics (when available)
- Visit OpenRouter for more information about available models
- For plugin issues, please file a GitHub issue on this repository
- For API-related questions, please refer to the OpenRouter documentation
This project is licensed under the GPL-3.0 License.