A basic AI agent for DeFi yield optimization deployed on Marlin's Oyster TEE (Trusted Execution Environment) enclaves. This agent provides intelligent portfolio analysis, yield comparison, risk assessment, and scenario simulation with conversational context for enhanced user experience.
- Portfolio Analysis: Analyze your DeFi portfolio using specialized tools
- APY Comparison: Calculate and compare Annual Percentage Yields across multiple protocols (see the sketch after this list)
- Risk Assessment: Multi-factor risk analysis for informed decision making
- Yield Simulation: Simulate yield scenarios over time periods
- Conversational Memory: Maintains conversation context for better user experience
- TEE Security: Runs in secure, verifiable Marlin Oyster enclaves
- Multi-Architecture Support: Compatible with both AMD64 and ARM64 platforms
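To make the yield math concrete, here is a small self-contained sketch of the kind of APY comparison and yield projection described above. It is illustrative only, not code from this repository; the protocol names, rates, and daily-compounding assumption are placeholders.

```python
# Illustrative only: compare APYs and project compounded yield over time.
def apy_from_apr(apr: float, compounds_per_year: int = 365) -> float:
    """Convert a nominal APR into an effective APY with periodic compounding."""
    return (1 + apr / compounds_per_year) ** compounds_per_year - 1

def project_yield(principal: float, apy: float, days: int) -> float:
    """Project the value of a position after `days` at a constant APY."""
    return principal * (1 + apy) ** (days / 365)

# Hypothetical protocol APRs used purely for comparison.
protocols = {"ProtocolA": 0.045, "ProtocolB": 0.062, "ProtocolC": 0.038}
for name, apr in sorted(protocols.items(), key=lambda kv: -kv[1]):
    apy = apy_from_apr(apr)
    print(f"{name}: APR {apr:.2%} -> APY {apy:.2%}, "
          f"1000 USDC after 90 days ≈ {project_yield(1000, apy, 90):.2f}")
```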
This application leverages Marlin's Oyster CVM (Confidential VM) to provide:
- Verifiable Computing: TEE-based execution ensures computation integrity
- Confidential Computing: Sensitive financial data remains protected during processing
- Decentralized Infrastructure: Runs on Marlin's distributed node network
- Remote Attestation: Cryptographic proofs verify execution authenticity
- Python 3.x
- Docker with buildx support
- Docker Hub account (for publishing images)
- Marlin Oyster CLI (`oyster-cvm`)
- Private key for deployment wallet
If you prefer not to build locally or use a custom Docker image, jump directly to Step 3 for deployment.
- Clone the Repository

git clone https://github.com/marlinprotocol/DeFi-AI-Agent.git
cd DeFi-AI-Agent
- Set Up Python Environment

# Install python3-venv
sudo apt install python3.12-venv

# Create virtual environment
python3 -m venv venv

# Activate virtual environment
source venv/bin/activate  # Linux/Mac
- Configure Environment Variables

# Copy the example environment file
cp .env.example .env

# Edit .env with your configuration
nano .env  # or use your preferred editor
- Install Dependencies

pip install -r requirements.txt
- Test Locally

# Run the application
python app.py

# In a new terminal, test the connection
nc 127.0.0.1 8080
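If you want a scripted check instead of netcat, a minimal client along the following lines can exercise the same endpoint. It assumes the agent accepts a plain newline-terminated text prompt over TCP on port 8080; that is inferred from the nc usage above, not a documented interface.

```python
import socket

HOST, PORT = "127.0.0.1", 8080  # after deployment, use the enclave IP instead

# Open a TCP connection, send one prompt, and print whatever the agent replies.
with socket.create_connection((HOST, PORT), timeout=30) as sock:
    sock.sendall(b"Compare the yields across my current positions\n")
    reply = sock.recv(4096)
    print(reply.decode("utf-8", errors="replace"))
```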
# Build the image for your system architecture
sudo docker build -t <your-username>/ai-agent:latest .

# Push the image to Docker Hub
sudo docker push <your-username>/ai-agent:latest
[Optional] Build and Push Multi-Architecture Docker Image
# Build for both AMD64 and ARM64 architectures
docker buildx build --platform linux/amd64,linux/arm64 \
-t <your-username>/ai-agent:latest --push .
Edit the `docker-compose.yml` file to reference your published Docker image:
services:
  defi-yield-optimizer:
    image: <your-username>/ai-agent:latest
    # ... rest of configuration
Deploy the enclave with `oyster-cvm`. For ARM64 instances (e.g. c6g.xlarge):

oyster-cvm deploy \
--wallet-private-key <Your_Private_Key> \
--duration-in-minutes 20 \
--docker-compose docker-compose.yml \
--init-params ".env:1:1:file:./.env" \
--instance-type c6g.xlarge
For AMD64 instances:

oyster-cvm deploy \
--wallet-private-key <Your_Private_Key> \
--duration-in-minutes 20 \
--docker-compose docker-compose.yml \
--arch amd64 \
--init-params ".env:1:1:file:./.env"
The `--init-params` flag securely passes the `.env` file to the enclave during initialization. For detailed information, visit the Marlin Oyster documentation.
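Assuming the compose file loads `.env` into the container's environment (a common pattern, not verified against this repository), the application can read the injected values with standard `os.environ` lookups. The variable names below are hypothetical placeholders, not keys defined by this project.

```python
import os

# Hypothetical configuration keys; replace with the variables your .env actually defines.
LLM_API_KEY = os.environ.get("LLM_API_KEY", "")
RPC_URL = os.environ.get("RPC_URL", "")
LISTEN_PORT = int(os.environ.get("LISTEN_PORT", "8080"))

if not LLM_API_KEY:
    raise RuntimeError("LLM_API_KEY is not set; check your .env file")
```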
Upon successful deployment, you'll receive the IP address of the enclave running your AI Agent.
Connect to your deployed AI agent using netcat:
nc <Enclave_IP> 8080
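The Python client sketched in the local testing step above applies here as well; point it at the enclave IP instead of 127.0.0.1.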