AIXCoder is an end-to-end web application designed to function as a comprehensive AI pair programmer. In an environment where developers often rely on cloud-based API services for AI assistance, this project provides a private, efficient, and cost-effective alternative by leveraging local Large Language Models (LLMs) through Ollama.
This tool was engineered to be a modular, multi-functional development assistant, capable of code generation, execution, debugging, and performance analysis across a variety of programming languages. The primary objective was to build a robust, self-contained application that demonstrates full-stack development principles and a strong understanding of modern AI integration.
## Table of Contents

- Core Features
- System Architecture
- Technology Stack
- Local Setup & Installation
- Usage Guide
- Project Analysis & Future Roadmap
## Core Features

- 🤖 **Multi-Language Code Generation**: Generates code in multiple languages (Python, Java, C++, JS, etc.) based on natural language prompts.
- ▶️ **Secure Code Execution**: Executes user-provided or generated code within a secure, temporary environment and captures standard output and errors. Supports both interpreted and compiled languages.
- 🐞 **AI-Powered Debugging**: Analyzes and corrects buggy code by leveraging the LLM's pattern recognition capabilities to provide a clean, rewritten implementation.
- ⏱️ **Performance & Complexity Analysis**: Provides both theoretical and practical performance metrics, including the asymptotic time complexity (Big O notation) and the real-world compile time.
- 🎤 **Voice-to-Text Integration**: Features a voice input module that transcribes spoken commands into text prompts, offering a hands-free interaction method.
- 📂 **File Ingestion System**: Supports direct file uploads, including `.zip` archives, with automatic filtering for supported file types.
## System Architecture

The application is built on a modular, client-server architecture where a Streamlit frontend communicates with a set of distinct backend modules, each responsible for a specific task.
- **Frontend Interface (`streamlit_app.py`)**: The user interface is built with Streamlit, which manages the application's state, handles user inputs (text, voice, file uploads), and renders all outputs. It serves as the primary entry point for all user interactions.
- **LLM Interface (`ollama_interface.py`)**: All interactions with the Large Language Model are centralized through this module. It uses the LangChain library to create a standardized interface with the locally-hosted Ollama server, ensuring that all AI-powered features are consistent and easily maintainable (see the first sketch after this list).
- **Code Execution Engine (`code_executor.py`)**: To ensure security, this module creates an isolated temporary directory for each execution task. It dynamically writes the code to a file and uses Python's `subprocess` module to run it, correctly handling both single-command execution for interpreted languages and multi-stage compile-and-run commands for languages like Java and C++ (see the second sketch after this list).
- **Specialized Backend Modules**:
  - `debugger.py` & `time_complexity.py`: These modules construct highly specific, engineered prompts tailored for their respective tasks before sending them to the LLM interface.
  - `voice_input.py`: Manages real-time audio capture and uses the SpeechRecognition library to transcribe the audio into text (see the third sketch after this list).
  - `file_handler.py`: Contains the logic for processing `UploadedFile` objects from Streamlit, with dedicated handling for extracting relevant files from `.zip` archives.
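To make the centralization concrete, here is a minimal sketch of what a module like `ollama_interface.py` could look like, assuming the `langchain_community` Ollama wrapper; `query_llm` and `generate_code` are illustrative names, not the project's actual API.

```python
# A minimal sketch of a centralized LLM interface (illustrative, not the
# project's actual code), assuming the langchain_community Ollama wrapper.
from langchain_community.llms import Ollama

llm = Ollama(model="phi3:mini")  # talks to the locally running Ollama server


def query_llm(prompt: str) -> str:
    """Send a single prompt to the local model and return its text reply."""
    return llm.invoke(prompt)


def generate_code(task: str, language: str) -> str:
    # Task-specific prompt engineering lives in one place, so every
    # feature (generation, debugging, analysis) stays consistent.
    prompt = (
        f"You are an expert {language} developer. "
        f"Write clean, runnable {language} code for this task:\n{task}\n"
        "Return only the code."
    )
    return query_llm(prompt)
```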
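Similarly, a minimal sketch of the execution engine's single-command versus compile-and-run handling, built on `tempfile` and `subprocess`; `run_code`, the `g++` toolchain, and the 30-second timeout are illustrative assumptions, not confirmed details of `code_executor.py`.

```python
# Illustrative sketch of an isolated compile-and-run pipeline; assumes g++
# is on the PATH for C++ and uses a hypothetical 30-second timeout.
import subprocess
import sys
import tempfile
from pathlib import Path


def run_code(code: str, language: str) -> tuple[str, str]:
    """Write code into an isolated temp dir, run it, capture stdout/stderr."""
    with tempfile.TemporaryDirectory() as workdir:
        if language == "python":
            src = Path(workdir) / "main.py"
            src.write_text(code)
            steps = [[sys.executable, str(src)]]          # single run stage
        elif language == "cpp":
            src = Path(workdir) / "main.cpp"
            src.write_text(code)
            exe = Path(workdir) / "main"
            steps = [["g++", str(src), "-o", str(exe)],   # compile stage
                     [str(exe)]]                          # then run stage
        else:
            raise ValueError(f"unsupported language: {language}")

        out, err = "", ""
        for cmd in steps:
            proc = subprocess.run(cmd, capture_output=True, text=True,
                                  timeout=30, cwd=workdir)
            out, err = proc.stdout, proc.stderr
            if proc.returncode != 0:
                break  # stop on compile or runtime failure
        return out, err
```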
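Finally, a sketch of the voice-capture flow using the SpeechRecognition API; the Google Web Speech backend shown here is an assumption, and `voice_input.py` may use a different recognizer.

```python
# Illustrative sketch of one voice-to-text round trip with SpeechRecognition.
import speech_recognition as sr


def transcribe_prompt() -> str:
    """Capture one spoken utterance from the microphone and return it as text."""
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:               # microphone access via PyAudio
        recognizer.adjust_for_ambient_noise(source)
        audio = recognizer.listen(source)
    try:
        return recognizer.recognize_google(audio)  # assumed backend choice
    except sr.UnknownValueError:
        return ""  # speech was unintelligible; caller keeps the old prompt
```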
## Technology Stack

This project utilizes a modern stack focused on local-first AI development and rapid application deployment.
| Category | Technology / Library |
| --- | --- |
| Web Framework | Streamlit |
| Local LLM Server | Ollama (models: `phi3:mini`, `codellama`) |
| LLM Orchestration | LangChain |
| Code Execution | `subprocess`, `tempfile` |
| Voice Recognition | SpeechRecognition, PyAudio |
## Local Setup & Installation

Follow these instructions to configure and run the project on your local machine.

### Prerequisites
- Python 3.9 or higher
- `pip` (Python package installer)
- **Ollama Installed**: The Ollama server must be installed and running.
- **Ollama Model**: At least one LLM must be pulled. This project was tested with `phi3:mini`:

  ```bash
  ollama pull phi3:mini
  ```
### Installation

1. **Clone the Repository**

   ```bash
   git clone https://github.com/your-username/AIXCoder.git
   cd AIXCoder
   ```

2. **Create and Activate a Virtual Environment**

   ```bash
   # For Windows
   python -m venv venv
   venv\Scripts\activate

   # For macOS/Linux
   python3 -m venv venv
   source venv/bin/activate
   ```

3. **Install Dependencies**: Install all required Python packages from the `requirements.txt` file.

   ```bash
   pip install -r requirements.txt
   ```

4. **Run the Streamlit Application**: Ensure the Ollama server is running in the background, then launch the application:

   ```bash
   streamlit run streamlit_app.py
   ```

   The application will be accessible in your web browser, typically at `http://localhost:8501`.
## Usage Guide

- **Code Generation**: Enter a natural language prompt into the main text area, select the target programming language, and click "Generate Code".
- **Voice Input**: Click "Start Listening" and speak your prompt clearly. The transcribed text will populate the text area.
- **File Upload**: Use the file uploader to select a local code file or a `.zip` archive. The contents will be loaded into the main text area.
- **Execution & Analysis**: With code present in the text area, use the "Run Code", "Debug Code", or "Analyze Complexity" buttons. Results for each action appear in an expandable section below the buttons.
## Project Analysis & Future Roadmap

This project demonstrates a full-stack, AI-driven developer tool that operates entirely on a local machine. It serves as a strong proof-of-concept with significant potential for expansion.
Potential Enhancements:
- **REST API for Decoupled Services**: Refactor the backend logic into a separate Flask or FastAPI application to create a REST API. This would decouple the frontend and backend, allowing other clients (like a VS Code extension) to use the service (a sketch follows this list).
- Containerization with Docker: Package the entire application and its dependencies into a Docker container for simplified, cross-platform deployment and scalability.
- Database Integration: Implement a local SQLite database to persist user chat history and preferences across sessions, creating a more stateful user experience.
- Advanced AI Features: Expand the feature set to include AI-driven unit test generation, code explanation, and language-to-language code translation.
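As an illustration of the first roadmap item, here is a hypothetical FastAPI sketch, assuming a reusable `generate_code` helper in `ollama_interface.py`; the module, endpoint, and field names are illustrative, not an existing API.

```python
# Hypothetical sketch of the proposed decoupled REST API (not existing code).
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class GenerateRequest(BaseModel):
    task: str
    language: str = "python"


@app.post("/generate")
def generate(req: GenerateRequest) -> dict:
    # Reuse the same backend module the Streamlit UI calls today, so a
    # VS Code extension could hit this endpoint instead of the web app.
    from ollama_interface import generate_code  # hypothetical import
    return {"code": generate_code(req.task, req.language)}
```

Saved as, say, `api.py`, this could be served with `uvicorn api:app --reload`, exposing generation at `POST /generate` to any HTTP client.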