[**English**](README.md) | [简体中文](README.zh.md)

# Code Atlas

**Code Atlas** is a powerful C++ application inspired by [Open Interpreter](https://github.com/OpenInterpreter/open-interpreter).

As an intelligent local agent, it can directly execute Python, PowerShell, and Batch scripts on your machine. Integrated with large language models (LLMs), it provides an interactive programming assistant that understands natural language requests and executes code to accomplish tasks.

**Currently developed and tested on Windows. Linux and macOS support is in progress.**
## ✨ Features

* **🤖 Local AI Agent**: Runs entirely on your machine with no external API dependency
* **💬 Cloud AI Optional**: Can also connect to any OpenAI-compatible API
* **🐍 Multi-language Execution**: Supports Python, PowerShell, and Batch scripts
* **🔄 Persistent Sessions**: Maintains Python interpreter state across executions
* **🚀 Local LLM Integration**: Works with the `llama.cpp` server for fast, private inference
* **⚡ Real-time Interaction**: Interactive CLI with streaming responses
* **🛡️ Secure & Private**: With a local model, all processing stays on your machine – your code never leaves it
* **🔧 Configurable**: Flexible JSON-based configuration system
* **🌐 Cross-platform**: Windows is fully supported today; full Linux/macOS support is coming soon

## 📋 Prerequisites

### System Requirements

* **Operating System**: Windows 10/11 (primary); Linux and macOS (experimental)
* **CPU**: Modern x64 processor (CUDA-capable GPU recommended for faster inference)
* **Memory**: 8 GB minimum (16 GB+ recommended for large models)
* **Storage**: At least 10 GB free space for models and dependencies

### Dependencies

* **CMake** 3.16 or newer
* **C++17-compatible compiler** (GCC, Clang, or MSVC)
* **Python 3.x** with development headers
* **Git**
#### Windows (MSYS2/MinGW64)

```bash
# Install MSYS2 from https://www.msys2.org/ and open the MINGW64 shell
pacman -Syu   # update the base system; restart the shell if prompted
pacman -Su    # finish updating after the restart
# Then install the build dependencies:
pacman -S --needed \
    mingw-w64-x86_64-toolchain \
    mingw-w64-x86_64-cmake \
    mingw-w64-x86_64-cpr \
    mingw-w64-x86_64-nlohmann-json \
    mingw-w64-x86_64-python
```
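
As an optional sanity check, confirm you are in the MinGW64 environment and that the toolchain is on `PATH` before building:

```bash
echo $MSYSTEM     # should print MINGW64
g++ --version     # compiler from mingw-w64-x86_64-toolchain
cmake --version   # needs to be 3.16 or newer
python --version  # the MinGW Python used for embedding
```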

## 🚀 Getting Started

You can download prebuilt executables directly from the [Releases](https://github.com/ystemsrx/Code-Atlas/releases) page.

Or build from source:

### 1. Clone the Repository

```bash
git clone --depth 1 https://github.com/ystemsrx/Code-Atlas.git
cd Code-Atlas
```

### 2. Build the Project

```bash
mkdir build
cd build
cmake .. -G "MinGW Makefiles"
cmake --build .
```
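
On recent CMake versions the same build can be done in one pass from the repository root; on Linux or macOS, drop the generator flag and let CMake pick the platform default (a sketch assuming a standard toolchain is installed):

```bash
cmake -S . -B build -G "MinGW Makefiles"   # omit -G "MinGW Makefiles" on Linux/macOS
cmake --build build -j                     # -j enables parallel compilation
```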

### 3. Configure the Application

Edit `config.json` to configure your LLM settings:

```json
{
  "api": {
    "base_url": "https://api.openai.com/v1/chat/completions",
    "key": "sk-..."
  },
  "model": {
    "name": "gpt-4o",
    "parameters": {
      "temperature": 0.2,
      "top_p": 0.9,
      "max_tokens": 4096,
      "frequency_penalty": 0.0,
      "presence_penalty": 0.6
    }
  }
}
```
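
If you use a cloud API, a quick `curl` against the configured endpoint verifies the URL and key before launching Code Atlas (hypothetical key and model shown; substitute your own values):

```bash
curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-..." \
  -d '{"model": "gpt-4o", "messages": [{"role": "user", "content": "ping"}]}'
```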

### 4. Launch LLM Server (Optional)

If you're using the included [llama.cpp](https://github.com/ggml-org/llama.cpp) server:

```bash
llama-server --jinja -fa -m model.gguf

# Or download a model from Hugging Face and run:
llama-server --jinja -fa -hf user/model.gguf
```

> ⚠️ Model-specific differences may apply due to function-calling compatibility. See [llama.cpp/function-calling.md](https://github.com/ggml-org/llama.cpp/blob/master/docs/function-calling.md) for details.
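
To confirm the server is up before starting Code Atlas, you can query it directly (assuming `llama-server`'s default port 8080):

```bash
curl http://localhost:8080/health    # should report an "ok" status
# Minimal request against the OpenAI-compatible endpoint:
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "ping"}]}'
```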
|
### 5. Run Code Atlas

```bash
./code-atlas        # on Windows builds, the binary is code-atlas.exe
```

## 💡 Usage Examples

### Basic Interactions

> Calculate factorial of 10
> ![Factorial Example](assets/factorial.png)

> List all running processes on Windows
> ![Processes Example](assets/Processes.png)

> Create/rename files
> ![Files Example](assets/files.png)

## 🔧 Configuration

### Config File Structure

The config template is located in the root directory as `config_template.json`. Copy it to create your working configuration:

```bash
cp config_template.json config.json
```

The `config.json` file controls all behaviors of Code Atlas:

```json
{
  "system": {
    "prompt": "You are ..."
  },
  "model": {
    "name": "Model name",
    "parameters": {
      "temperature": 0.2,
      "top_p": 0.9,
      "max_tokens": 4096,
      "frequency_penalty": 0.0,
      "presence_penalty": 0.6
    }
  },
  "api": {
    "base_url": "http://localhost:8080/v1/chat/completions",
    "key": "Required if using a cloud API"
  }
}
```
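
Because Python is already a build dependency, its standard library gives you a quick syntax check for this file before launching (a convenience, not part of Code Atlas itself):

```bash
python -m json.tool config.json   # prints the parsed JSON, or a syntax error with its location
```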

### Tool Configuration

Code Atlas supports three execution environments:

* **Python**: A persistent IPython-like interpreter
* **PowerShell**: Native PowerShell script execution on Windows
* **Batch**: Windows CMD/batch script execution

## 🤝 Contributing

We welcome suggestions and contributions. You can participate by:

* Opening Issues
* Submitting Pull Requests
* Sharing your use cases and feedback

## 🐛 Troubleshooting

### Common Issues

**Build Failures**:

* Ensure all dependencies are installed correctly
* Check CMake version compatibility (3.16+)
* Verify that the Python development headers are available

**Runtime Errors**:

* Check that `config.json` exists at the expected path and contains valid JSON
* Ensure the LLM server is running and reachable at the configured `base_url`
* Verify that the embedded Python interpreter initializes correctly

**Performance Issues**:

* Consider GPU acceleration for LLM inference
* Tune model parameters (e.g., `temperature`, `max_tokens`)
* Monitor system resources during execution

## 📄 License

This project is licensed under the MIT License – see the [LICENSE](LICENSE) file for details.

## 🙏 Acknowledgements

* **llama.cpp** – For robust local LLM inference
* **nlohmann/json** – High-quality JSON parsing for C++
* **cpr** – Simple and effective HTTP client library
* **Python** – Embedded interpreter powering the Python backend

---

**⚠️ Security Warning**:
Code Atlas executes code directly on your system. Use only with trusted models and in secure environments. Always review the generated code before running in production.