DeFi AI Agent

A basic AI agent for DeFi yield optimization deployed on Marlin's Oyster TEE (Trusted Execution Environment) enclaves. The agent provides intelligent portfolio analysis, yield comparison, risk assessment, and scenario simulation, with conversational context for an enhanced user experience.

🚀 Features

  • Portfolio Analysis: Analyze your DeFi portfolio using specialized tools
  • APY Comparison: Calculate and compare Annual Percentage Yields across multiple protocols (see the sketch after this list)
  • Risk Assessment: Multi-factor risk analysis for informed decision making
  • Yield Simulation: Simulate yield scenarios over time periods
  • Conversational Memory: Maintains conversation context for a better user experience
  • TEE Security: Runs in secure, verifiable Marlin Oyster enclaves
  • Multi-Architecture Support: Compatible with both AMD64 and ARM64 platforms
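
The APY Comparison and Yield Simulation features come down to standard compounding math. The sketch below is an illustration only, not the agent's actual tool code, and the protocol names and rates in it are made-up placeholders:

# Illustrative sketch of the math behind APY comparison and yield simulation.
# Protocol names and rates are hypothetical placeholders, not real data.
def apr_to_apy(apr: float, compounds_per_year: int) -> float:
    """Convert a nominal APR into APY for a given compounding frequency."""
    return (1 + apr / compounds_per_year) ** compounds_per_year - 1

def project_yield(principal: float, apy: float, days: int) -> float:
    """Project a position's value after a number of days at a constant APY."""
    return principal * (1 + apy) ** (days / 365)

protocols = {"ProtocolA": 0.045, "ProtocolB": 0.052}  # hypothetical APRs
for name, apr in protocols.items():
    apy = apr_to_apy(apr, compounds_per_year=365)
    value = project_yield(1000, apy, days=90)
    print(f"{name}: APR {apr:.2%} -> APY {apy:.2%}; 1000 USDC after 90 days: {value:.2f}")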

🏗️ Architecture

This application leverages Marlin's Oyster CVM (Confidential VM) to provide:

  • Verifiable Computing: TEE-based execution ensures computation integrity
  • Confidential Computing: Sensitive financial data remains protected during processing
  • Decentralized Infrastructure: Runs on Marlin's distributed node network
  • Remote Attestation: Cryptographic proofs verify execution authenticity

📋 Prerequisites

  • Python 3.x
  • Docker with buildx support
  • Docker Hub account (for publishing images)
  • Marlin Oyster CLI (oyster-cvm)
  • Private key for deployment wallet

🔧 Installation and Setup

Option 1: Quick Deployment (Skip to Step 3)

If you prefer not to build locally or use a custom Docker image, jump directly to Step 3 to prepare your .env file (it is still needed for deployment), then continue with the Deploy on Oyster CVM section.

Option 2: Local Development and Testing

  1. Clone the Repository

    git clone https://github.com/marlinprotocol/DeFi-AI-Agent.git
    cd DeFi-AI-Agent
  2. Set Up Python Environment

    # Install python3-venv
    sudo apt install python3.12-venv
    
    # Create virtual environment
    python3 -m venv venv
    
    # Activate virtual environment
    source venv/bin/activate  # Linux/Mac
  3. Configure Environment Variables

    # Copy the example environment file
    cp .env.example .env
    
    # Edit .env with your configuration
    nano .env  # or use your preferred editor
  4. Install Dependencies

    pip install -r requirements.txt
  5. Test Locally

    # Run the application
    python app.py
    
    # In a new terminal, test the connection
    nc 127.0.0.1 8080
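
As an alternative to netcat, a small Python client can exercise the same local endpoint. This is only a sketch, assuming the agent exchanges plain UTF-8 text over TCP on port 8080 (which is what the nc test above implies); adjust it to the agent's actual message format.

# Minimal TCP client sketch for local testing. Assumes the agent speaks
# plain UTF-8 text on port 8080, as the nc example above suggests.
import socket

HOST, PORT = "127.0.0.1", 8080

with socket.create_connection((HOST, PORT), timeout=30) as sock:
    sock.sendall(b"Compare APYs for USDC across the supported protocols\n")
    sock.settimeout(5)
    chunks = []
    try:
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    except socket.timeout:
        pass  # assume the agent has finished responding
    print(b"".join(chunks).decode("utf-8", errors="replace"))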

🐳 Docker Deployment

Build and Push Docker Image

# Build the image for your system architecture
sudo docker build -t <your-username>/ai-agent:latest .
# Push the image to Docker Hub
sudo docker push <your-username>/ai-agent:latest

[Optional] Build and Push Multi-Architecture Docker Image

# Build for both AMD64 and ARM64 architectures
docker buildx build --platform linux/amd64,linux/arm64 \
  -t <your-username>/ai-agent:latest --push .
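
If buildx has not been set up on this machine before, you may first need to create and select a builder with docker buildx create --use; this is a standard buildx prerequisite rather than anything specific to this project.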

Update Docker Compose Configuration

Edit the docker-compose.yml file to reference your published Docker image:

services:
  defi-yield-optimizer:
    image: <your-username>/ai-agent:latest
    # ... rest of configuration

🚀 Deploy on Oyster CVM

ARM64 Architecture Deployment

oyster-cvm deploy \
  --wallet-private-key <Your_Private_Key> \
  --duration-in-minutes 20 \
  --docker-compose docker-compose.yml \
  --init-params ".env:1:1:file:./.env" \
  --instance-type c6g.xlarge

AMD64 Architecture Deployment

oyster-cvm deploy \
  --wallet-private-key <Your_Private_Key> \
  --duration-in-minutes 20 \
  --docker-compose docker-compose.yml \
  --arch amd64 \
  --init-params ".env:1:1:file:./.env"

Understanding Init Params

The --init-params flag securely passes the .env file to the enclave during initialization. For detailed information, visit the Marlin Oyster documentation.
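
In the example used above, ./.env is the file on your local machine; the other colon-separated fields describe how it is exposed inside the enclave, and their exact semantics are covered in the documentation linked above.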

Successful Deployment

Upon successful deployment, you'll receive the IP address of the enclave running your AI Agent.

💬 Interacting with the AI Agent

Connect to your deployed AI agent using netcat:

nc <Enclave_IP> 8080
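
The Python client sketched in the local testing section works here as well; point it at the enclave by replacing 127.0.0.1 with <Enclave_IP> (again assuming the agent speaks plain text over TCP on port 8080).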

About

This project was developed for the application section of the "Offchain computing using TEE coprocessors" course.
