
AI Powered Request Response System

A Python backend application that serves as an AI wrapper, seamlessly connecting users to the power of OpenAI's language models.

Built with FastAPI, Python, PostgreSQL, and the OpenAI API.

📑 Table of Contents

  • 📝 Overview
  • 📦 Features
  • 📂 Structure
  • 💻 Installation
  • 🏗️ Usage
  • 🌐 Hosting
  • 📄 License
  • 👏 Authors

πŸ“ Overview

The repository contains a Minimum Viable Product (MVP) called "AI Powered Request Response System" that acts as a Python backend application, bridging the gap between human communication and advanced AI technologies. Users can send requests to the system via a defined API endpoint, which then processes them using the OpenAI API and returns a comprehensive response. The system is built with scalability, security, and user experience in mind, using technologies like FastAPI, SQLAlchemy, and PostgreSQL.

📦 Features

| Feature | Description |
|---------|-------------|
| ⚙️ Architecture | The system uses a REST API architecture built on FastAPI for handling requests and responses, with SQLAlchemy for database interaction with PostgreSQL. |
| 📄 Documentation | This README provides a detailed overview of the MVP, its features, installation, usage, and deployment instructions. |
| 🔗 Dependencies | The project relies on FastAPI, SQLAlchemy, psycopg2-binary, OpenAI, and Pydantic for API development, database interaction, and data validation. |
| 🧩 Modularity | The codebase is organized into separate modules for models, routers, services, and utilities to aid maintainability. |
| 🧪 Testing | Unit tests written with pytest help ensure the robustness and reliability of the codebase. |
| ⚡️ Performance | The backend is designed to process user requests efficiently and return responses promptly; caching is used for frequently asked questions. |
| 🔐 Security | Authentication and authorization protocols, input validation, and data sanitization guard against security vulnerabilities. |
| 🔀 Version Control | Git is used for version control, with a startup.sh script managing application startup. |
| 🔌 Integrations | The backend integrates with the OpenAI API, communicating with it securely to process user requests and retrieve responses. |
| 📶 Scalability | PostgreSQL storage and efficient request-handling techniques support scaling. |
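The caching row above is descriptive only; the repository's implementation is not shown here. A minimal in-memory sketch of caching repeated prompts, using the standard library (`call_openai` is a hypothetical stand-in for the real OpenAI call), might look like this:

```python
from functools import lru_cache

def call_openai(prompt: str) -> str:
    """Stand-in for the real OpenAI API call (not shown in this README)."""
    return f"response to: {prompt}"

@lru_cache(maxsize=256)
def cached_completion(prompt: str) -> str:
    """Return a cached answer for prompts that were seen before,
    avoiding a second round-trip to the OpenAI API."""
    return call_openai(prompt)
```

A production system would more likely use an external cache such as Redis so entries survive restarts and are shared across workers, but the call pattern is the same.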

📂 Structure

β”œβ”€β”€ main.py                # Application entry point
β”œβ”€β”€ database.py            # Database setup and session management
β”œβ”€β”€ models
β”‚   └── models.py          # Database models
β”œβ”€β”€ routers
β”‚   └── requests.py        # API routes for handling user requests
β”œβ”€β”€ utils
β”‚   └── helpers.py         # Utility functions
β”œβ”€β”€ services
β”‚   └── openai_service.py  # OpenAI API interaction logic
└── tests
    └── test_main.py      # Unit tests for the main application logic

💻 Installation

🔧 Prerequisites

  • Python 3.9+
  • PostgreSQL 14+
  • Docker (optional, for containerized deployment)

🚀 Setup Instructions

  1. Clone the repository:
    git clone https://github.com/coslynx/AI-Powered-Request-Response-System.git
    cd AI-Powered-Request-Response-System
  2. Install dependencies:
    pip install -r requirements.txt
  3. Set up the database:
    • Create a PostgreSQL database.
  4. Configure environment variables:
    cp .env.example .env
    In .env, set DATABASE_URL to your PostgreSQL connection string and replace sk-YOUR_API_KEY_HERE with your actual OpenAI API key.
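A populated .env might look like the sketch below (both values are placeholders, and the database name is hypothetical; follow the variable names in the repository's .env.example):

```
OPENAI_API_KEY=sk-YOUR_API_KEY_HERE
DATABASE_URL=postgresql://user:password@localhost:5432/requests_db
```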

πŸ—οΈ Usage

πŸƒβ€β™‚οΈ Running the MVP

  1. Start the application:
    python main.py

βš™οΈ Configuration

  • The .env file contains environment variables like the OpenAI API key and database connection string.

📚 Examples

  • Sending a request:
    curl -X POST http://localhost:8000/requests -H "Content-Type: application/json" -d '{"text": "What is the meaning of life?"}'
  • Response:
    {
        "id": 1,
        "text": "What is the meaning of life?",
        "response": "The meaning of life is a question that has been pondered by philosophers and theologians for centuries. There is no one definitive answer, and the meaning of life may be different for each individual. Some people find meaning in their relationships, their work, their faith, or their hobbies. Ultimately, the meaning of life is up to each individual to decide.",
        "created_at": "2023-12-18T15:10:10.123456Z"
    }
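The same call can be made from Python. A minimal sketch using only the standard library (the URL and payload mirror the curl example above; the server must be running locally for the final request to succeed):

```python
import json
import urllib.request

API_URL = "http://localhost:8000/requests"  # default local address from the curl example

def build_request(text: str) -> urllib.request.Request:
    """Build a POST request equivalent to the curl command above."""
    body = json.dumps({"text": text}).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

if __name__ == "__main__":
    # Requires the MVP to be running on localhost:8000.
    with urllib.request.urlopen(build_request("What is the meaning of life?")) as resp:
        print(json.loads(resp.read()))
```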

🌐 Hosting

🚀 Deployment Instructions

  1. Build a Docker image (optional):
    docker build -t ai-request-response-system:latest .
  2. Run the Docker container (optional):
    docker run -p 8000:8000 ai-request-response-system:latest
  3. Deploy to a cloud platform (e.g., Heroku):
    • Create a new Heroku app:
    heroku create ai-request-response-system-production
    • Set environment variables:
    heroku config:set OPENAI_API_KEY=your_openai_api_key
    heroku config:set DATABASE_URL=your_database_url
    • Deploy the code:
    git push heroku main

🔑 Environment Variables

  • OPENAI_API_KEY: Your OpenAI API key
  • DATABASE_URL: Your PostgreSQL database connection string (e.g., postgresql://user:password@host:port/database)

📜 API Documentation

πŸ” Endpoints

  • POST /requests
    • Description: Create a new request to the OpenAI API.
    • Request Body:
      {
          "text": "Your request here"
      }
    • Response:
      {
          "id": 1,
          "text": "Your request here",
          "response": "The response from OpenAI",
          "created_at": "2023-12-18T15:10:10.123456Z"
      }
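For reference, the response body can be mirrored client-side with a small dataclass. This is a sketch for consumers of the API and is not a model from the repository:

```python
import json
from dataclasses import dataclass

@dataclass
class RequestRecord:
    """Mirrors the JSON body returned by POST /requests."""
    id: int
    text: str
    response: str
    created_at: str

    @classmethod
    def from_json(cls, raw: str) -> "RequestRecord":
        # Field names in the JSON match the dataclass attributes exactly.
        return cls(**json.loads(raw))
```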

📜 License & Attribution

📄 License

This Minimum Viable Product (MVP) is licensed under the GNU AGPLv3 license.

🤖 AI-Generated MVP

This MVP was entirely generated using artificial intelligence through CosLynx.com.

No human was directly involved in the coding process of the repository: AI-Powered-Request-Response-System

📞 Contact

For any questions or concerns regarding this AI-generated MVP, please contact CosLynx at:

🌐 CosLynx.com

Create Your Custom MVP in Minutes With CosLynxAI!