AIXCoder: An AI-Powered Coding Assistant

Python · Streamlit · LangChain · License: MIT

AIXCoder is an end-to-end web application designed to function as a comprehensive AI pair programmer. In an environment where developers often rely on cloud-based API services for AI assistance, this project provides a private, efficient, and cost-effective alternative by leveraging local Large Language Models (LLMs) through Ollama.

This tool was engineered to be a modular, multi-functional development assistant, capable of code generation, execution, debugging, and performance analysis across a variety of programming languages. The primary objective was to build a robust, self-contained application that demonstrates full-stack development principles and a strong understanding of modern AI integration.

📋 Table of Contents

  • Core Features
  • System Architecture
  • Technology Stack
  • Local Setup & Installation
  • Usage Guide
  • Project Analysis & Future Roadmap

Core Features

  • 🤖 Multi-Language Code Generation: Generates code in multiple languages (Python, Java, C++, JS, etc.) based on natural language prompts.
  • ▶️ Secure Code Execution: Executes user-provided or generated code within a secure, temporary environment and captures standard output and errors. Supports both interpreted and compiled languages.
  • 🐞 AI-Powered Debugging: Analyzes and corrects buggy code by leveraging the LLM's pattern recognition capabilities to provide a clean, rewritten implementation.
  • ⏱️ Performance & Complexity Analysis: Provides both theoretical and practical performance metrics, including the asymptotic time complexity (Big O notation) and the real-world compile time.
  • 🎤 Voice-to-Text Integration: Features a voice input module that transcribes spoken commands into text prompts, offering a hands-free interaction method (see the sketch after this list).
  • 📂 File Ingestion System: Supports direct file uploads, including .zip archives, with automatic filtering for supported file types.
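
As a rough illustration of the voice-to-text flow, the sketch below uses the SpeechRecognition library named in the stack; the helper name transcribe_voice_prompt and the choice of the Google web recognizer are illustrative assumptions, not necessarily what voice_input.py actually does.

    import speech_recognition as sr

    def transcribe_voice_prompt() -> str:
        """Capture one utterance from the default microphone and return text."""
        recognizer = sr.Recognizer()
        with sr.Microphone() as source:
            # Calibrate against background noise, then block until a phrase ends.
            recognizer.adjust_for_ambient_noise(source, duration=0.5)
            audio = recognizer.listen(source)
        try:
            return recognizer.recognize_google(audio)
        except sr.UnknownValueError:
            return ""  # speech could not be understood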

System Architecture

The application is built on a modular, client-server architecture where a Streamlit frontend communicates with a set of distinct backend modules, each responsible for a specific task.

  1. Frontend Interface (streamlit_app.py): The user interface is built with Streamlit, which manages the application's state, handles user inputs (text, voice, file uploads), and renders all outputs. It serves as the primary entry point for all user interactions.
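
     A minimal sketch of this entry-point pattern, with widget labels borrowed from the Usage Guide below (the real streamlit_app.py is considerably richer):

        import streamlit as st

        st.title("AIXCoder")
        prompt = st.text_area("Describe the code you want")
        language = st.selectbox("Target language", ["Python", "Java", "C++", "JavaScript"])

        if st.button("Generate Code"):
            # st.session_state persists values across Streamlit's script reruns.
            st.session_state["last_prompt"] = prompt
            st.write(f"(here the app would call the backend to generate {language} code)")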

  2. LLM Interface (ollama_interface.py): All interactions with the Large Language Model are centralized through this module. It uses the LangChain library to create a standardized interface with the locally hosted Ollama server, ensuring that all AI-powered features are consistent and easily maintainable.
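
     A minimal sketch of such a wrapper, assuming the langchain-community Ollama integration (the get_llm_response helper name is hypothetical):

        from langchain_community.llms import Ollama

        # Talks to the locally running Ollama server (default: http://localhost:11434).
        llm = Ollama(model="phi3:mini")

        def get_llm_response(prompt: str) -> str:
            """Send a fully constructed prompt to the local model and return its reply."""
            return llm.invoke(prompt)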

  3. Code Execution Engine (code_executor.py): To ensure security, this module creates an isolated temporary directory for each execution task. It dynamically writes the code to a file and uses Python's subprocess module to run it, correctly handling both single-command execution for interpreted languages and multi-stage compile-and-run commands for languages like Java and C++.
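
     The compile-and-run pattern could look roughly like this; the command table and function name are assumptions, and the real module's error handling is more thorough:

        import os
        import subprocess
        import tempfile

        # Each language maps its source file to one or more commands to run in order.
        COMMANDS = {
            "python": lambda src: [["python", src]],
            "cpp": lambda src: [["g++", src, "-o", src[:-4]], [src[:-4]]],  # compile, then run
        }

        def run_code(code: str, language: str, suffix: str) -> str:
            with tempfile.TemporaryDirectory() as workdir:
                src = os.path.join(workdir, "main" + suffix)
                with open(src, "w") as f:
                    f.write(code)
                chunks = []
                for cmd in COMMANDS[language](src):
                    result = subprocess.run(cmd, capture_output=True, text=True, timeout=30)
                    chunks.append(result.stdout + result.stderr)
                    if result.returncode != 0:
                        break  # stop after a compile or runtime failure
                return "\n".join(chunks)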

  4. Specialized Backend Modules:

    • debugger.py & time_complexity.py: These modules construct highly specific, engineered prompts tailored for their respective tasks before sending them to the LLM interface (see the prompt sketch after this list).
    • voice_input.py: Manages real-time audio capture and uses the SpeechRecognition library to transcribe the audio into text.
    • file_handler.py: Contains the logic for processing UploadedFile objects from Streamlit, with dedicated handling for extracting relevant files from .zip archives.
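
To make the prompt-engineering idea concrete, a debugging prompt might be assembled like the hypothetical template below; the actual wording inside debugger.py and time_complexity.py is not shown in this README:

    # Hypothetical template; the project's real prompts will differ in wording.
    DEBUG_PROMPT_TEMPLATE = (
        "You are an expert {language} developer. The following code contains "
        "one or more bugs. Return only the corrected code, with no commentary.\n\n"
        "{code}"
    )

    def build_debug_prompt(code: str, language: str) -> str:
        return DEBUG_PROMPT_TEMPLATE.format(language=language, code=code)

    # The resulting string is then handed to the shared LLM interface, e.g.:
    # fixed_code = get_llm_response(build_debug_prompt(buggy_code, "python"))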

Technology Stack

This project utilizes a modern stack focused on local-first AI development and rapid application deployment.

Category            Technology / Library
------------------  -------------------------------------
Web Framework       Streamlit
Local LLM Server    Ollama (models: phi3:mini, codellama)
LLM Orchestration   LangChain
Code Execution      subprocess, tempfile
Voice Recognition   SpeechRecognition, PyAudio

Local Setup & Installation

Follow these instructions to configure and run the project on your local machine.

Prerequisites

  • Python 3.9 or higher
  • pip (Python package installer)
  • Ollama Installed: The Ollama server must be installed and running.
  • Ollama Model: At least one LLM must be pulled. This project was tested with phi3:mini.
    ollama pull phi3:mini

Configuration Steps

  1. Clone the Repository

    git clone https://github.com/your-username/AIXCoder.git
    cd AIXCoder
  2. Create and Activate a Virtual Environment

    # For Windows
    python -m venv venv
    venv\Scripts\activate
    
    # For macOS/Linux
    python3 -m venv venv
    source venv/bin/activate
  3. Install Dependencies

     Install all required Python packages from the requirements.txt file.

    pip install -r requirements.txt
  4. Run the Streamlit Application

     Ensure the Ollama server is running in the background. Then, launch the application using the following command:

    streamlit run streamlit_app.py

    The application will be accessible in your web browser, typically at http://localhost:8501.

Usage Guide

  1. Code Generation: Enter a natural language prompt into the main text area, select the target programming language, and click "Generate Code".
  2. Voice Input: Click "Start Listening" and speak your prompt clearly. The transcribed text will populate the text area.
  3. File Upload: Use the file uploader to select a local code file or a .zip archive. The contents will be loaded into the main text area (zip handling is sketched after this list).
  4. Execution & Analysis: With code present in the text area, utilize the "Run Code", "Debug Code", or "Analyze Complexity" buttons. Results for each action will appear in an expandable section below the buttons.
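
For reference, .zip handling of the kind described in step 3 can be sketched as follows; the extension list and function name are illustrative, not the project's actual file_handler.py API:

    import io
    import zipfile

    SUPPORTED_EXTENSIONS = (".py", ".java", ".cpp", ".js")  # assumed filter list

    def extract_supported_files(uploaded_bytes: bytes) -> dict:
        """Return {filename: source_text} for supported files inside a zip upload."""
        files = {}
        with zipfile.ZipFile(io.BytesIO(uploaded_bytes)) as archive:
            for name in archive.namelist():
                if name.endswith(SUPPORTED_EXTENSIONS):
                    files[name] = archive.read(name).decode("utf-8", errors="replace")
        return files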

Project Analysis & Future Roadmap

This project successfully demonstrates the creation of a full-stack, AI-driven developer tool that operates entirely on a local machine. It serves as a strong proof-of-concept with significant potential for expansion.

Potential Enhancements:

  • REST API for Decoupled Services: Refactor the backend logic into a separate Flask or FastAPI application to create a REST API. This would decouple the frontend and backend, allowing other clients (like a VS Code extension) to use the service.
  • Containerization with Docker: Package the entire application and its dependencies into a Docker container for simplified, cross-platform deployment and scalability.
  • Database Integration: Implement a local SQLite database to persist user chat history and preferences across sessions, creating a more stateful user experience (a possible schema is sketched below).
  • Advanced AI Features: Expand the feature set to include AI-driven unit test generation, code explanation, and language-to-language code translation.
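
As a sketch of what the proposed persistence layer might look like (table and function names are hypothetical):

    import sqlite3

    def init_db(path: str = "aixcoder_history.db") -> sqlite3.Connection:
        conn = sqlite3.connect(path)
        conn.execute(
            """CREATE TABLE IF NOT EXISTS chat_history (
                   id INTEGER PRIMARY KEY AUTOINCREMENT,
                   role TEXT NOT NULL,           -- 'user' or 'assistant'
                   content TEXT NOT NULL,
                   created_at TEXT DEFAULT CURRENT_TIMESTAMP
               )"""
        )
        return conn

    def save_message(conn: sqlite3.Connection, role: str, content: str) -> None:
        conn.execute(
            "INSERT INTO chat_history (role, content) VALUES (?, ?)", (role, content)
        )
        conn.commit()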
