consoleAI

A versatile script that provides a command-line interface (CLI) for interacting with various Large Language Models (LLMs) from multiple inference providers. It supports model selection, conversation history, system prompts, real-time streaming of responses, filtering of the model list, and toggling tool calling on or off. Runs on any macOS or Linux system, and even on Android via Termux.

Features

  • Multi-Provider Support: Chat with models from:
    • Google Gemini
    • OpenRouter (access to many model providers)
    • Groq
    • Together AI
    • Cerebras AI
    • Novita AI
  • Dynamic Model Selection: Fetches and lists available models from the chosen provider, allowing you to select one interactively.
  • Conversation History: Remembers the last N messages (configurable) to maintain context; use /history to recall the conversation log.
  • System Prompt: Define a system-level instruction for the AI (configurable).
  • Streaming Responses: AI responses are streamed token by token for a real-time feel.
  • Color-Coded Output: User input, AI responses, errors, and info messages each appear in a different colour.
  • Minimal Dependencies: Requires only bash, curl, bc, and jq.
  • Easy Configuration: API keys and core settings are managed directly within the script.
  • Tool Calling: Tool calling is available for Gemini; enable it when prompted.
  • Search Filter: Pass an optional [filter] argument to narrow the model list (e.g., ./ai.sh openrouter 32b or ./ai.sh gemini pro).
  • Session Management: /save, /load, and /clear commands.
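Since the script needs only the four tools listed above, a quick pre-flight check is easy. A minimal sketch (not part of ai.sh itself):

```bash
# Report any of the required tools that are missing from PATH
for cmd in bash curl bc jq; do
  command -v "$cmd" >/dev/null 2>&1 || echo "missing: $cmd"
done
```

On most Linux distributions, any tool this reports can be installed from the default package repositories (e.g., apt, dnf, or pkg on Termux).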
Installation

  1. Download the Script: Clone the repository or download ai.sh to your local machine.

    git clone https://github.com/olumolu/consoleAI.git
    cd consoleAI

    Or just download the ai.sh file.

Important

2. Configure API Keys: You MUST add your API keys to the script. Open ai.sh in a text editor and locate the API key section:

```bash
GEMINI_API_KEY=""
OPENROUTER_API_KEY=""
GROQ_API_KEY=""
TOGETHER_API_KEY=""
CEREBRAS_API_KEY=""
NOVITA_API_KEY=""
```
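You only need keys for the providers you intend to use. If you would rather not keep secrets in the file itself, one option (a hypothetical variant, not a feature of the stock script) is to change those lines so they fall back to exported environment variables:

```bash
# Hypothetical variant: prefer an environment variable if one is exported,
# otherwise keep the script's empty default
GEMINI_API_KEY="${GEMINI_API_KEY:-}"
OPENROUTER_API_KEY="${OPENROUTER_API_KEY:-}"
```

With that change you could run, for example, `GEMINI_API_KEY=your-key ./ai.sh gemini` without editing the script again.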
  3. Make it Executable:

    chmod +x ai.sh

  4. Run the Script:

To start interacting with a specific AI provider, execute the script from your terminal followed by the provider's name. Here are the supported commands:

```bash
./ai.sh gemini
./ai.sh groq
./ai.sh together
./ai.sh openrouter
./ai.sh cerebras
./ai.sh novita
```

Choose a model from any provider by entering the number shown before its name in the list of available models.
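The optional [filter] argument appears to act as a case-insensitive substring match over model names. The idea can be illustrated outside the script (the model names below are made up for the example):

```bash
# Narrow a list of model names the way './ai.sh <provider> <filter>' narrows the menu
printf '%s\n' "qwen-2.5-32b" "llama-3-8b" "gemini-pro" | grep -i "32b"
# prints: qwen-2.5-32b
```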

Important

5. (Optional) Adjust Default Settings:

You can customize other settings near the top of the script:

  • MAX_HISTORY_MESSAGES: Number of past messages (user + AI) to keep in history.
  • DEFAULT_OAI_TEMPERATURE, DEFAULT_OAI_MAX_TOKENS, DEFAULT_OAI_TOP_P: Parameters for OpenAI-compatible APIs.
  • SYSTEM_PROMPT: The default system-level instruction for the AI. Set to "" to disable.
  • SESSION_DIR: Directory where sessions are saved with /save.
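Taken together, the settings block near the top of ai.sh looks roughly like the following. The values shown here are illustrative placeholders; check the script itself for the actual defaults:

```bash
# Illustrative values only -- the real defaults in ai.sh may differ
MAX_HISTORY_MESSAGES=20
DEFAULT_OAI_TEMPERATURE=0.7
DEFAULT_OAI_MAX_TOKENS=1024
DEFAULT_OAI_TOP_P=0.9
SYSTEM_PROMPT="You are a helpful assistant."   # set to "" to disable
SESSION_DIR="$HOME/.consoleai_sessions"
```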

Note

Out of scope

  • Image generation is out of scope, as this is a terminal app.
