A car dealership AI agent that demonstrates how the Redis Agent Memory Server enables long-term memory and conversation-context retrieval, allowing the agent to remember customer preferences across sessions and provide personalized car-purchase assistance.
## Table of Contents

- Demo Objectives
- Tech Stack
- Prerequisites
- Getting Started
- Architecture
- Project Structure
- Usage
- Docker Commands Reference
- Cloud Deployment
- Resources
- Maintainers
- License
## Demo Objectives

- Long-term memory storage using Redis Agent Memory Server for persistent customer preferences
- Short-term/working memory using LangGraph checkpointers and Redis Agent Memory Server
- Conversation context retrieval for personalized interactions across sessions
- Agentic orchestration with LangGraph workflow stages (needs analysis → shortlist → test drive → financing)
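The core memory pattern can be sketched without the real Agent Memory Server. The toy `MemoryStore` below (class and method names are invented for illustration, not the server's API) shows the idea the demo is built on: preferences saved in one session remain available in the next.

```python
from dataclasses import dataclass, field


@dataclass
class MemoryStore:
    """Toy stand-in for the Agent Memory Server: one shared long-term
    store keyed by customer, separate from per-session working memory."""
    long_term: dict = field(default_factory=dict)  # customer_id -> preferences

    def remember(self, customer_id: str, key: str, value: str) -> None:
        # Persist a preference into long-term memory.
        self.long_term.setdefault(customer_id, {})[key] = value

    def recall(self, customer_id: str) -> dict:
        # Return everything known about the customer, across sessions.
        return self.long_term.get(customer_id, {})


store = MemoryStore()

# Session 1: the customer states preferences; the agent persists them.
store.remember("alice", "body_type", "SUV")
store.remember("alice", "seats", "5")

# Session 2 (a new conversation, same store): the preferences survive,
# so the agent can personalize its very first response.
print(store.recall("alice"))  # {'body_type': 'SUV', 'seats': '5'}
```

In the actual demo this role is played by the Agent Memory Server backed by Redis, which adds vector search over memories rather than exact-key lookup.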
## Tech Stack

| Layer | Technology | Purpose |
|---|---|---|
| Memory | Redis Agent Memory Server | Long-term and working memory management |
| Database | Redis Cloud | Vector storage and session persistence |
| Orchestration | LangGraph | Stateful workflow management |
| LLM Framework | LangChain | LLM integration and prompting |
| Backend | FastAPI | Python REST API |
| Frontend | React 18 + TypeScript | User interface |
| Styling | Tailwind CSS | UI styling |
| LLM | OpenAI GPT-4 | Language model |
| Deployment | Docker + Terraform | Containerization and cloud infrastructure |
## Prerequisites

- Python 3.11+
- Node.js 18+
- Docker and Docker Compose
- Redis Cloud account or local Redis instance
- OpenAI API key
## Getting Started

```bash
git clone <repository-url>
cd dealership-chatbot-agent-memory-demo
```

Create a `.env` file in the project root:
```
OPENAI_API_KEY=your_openai_api_key_here
REDIS_URL=redis://default:password@your-redis-host:port
MEMORY_SERVER_URL=http://localhost:8000
```

Get the pre-built Docker image from Docker Hub:
```bash
docker run -p 8000:8000 \
  -e REDIS_URL=redis://default:<password>@<your-redis-host>:<port> \
  -e OPENAI_API_KEY=<your-openai-api-key> \
  redislabs/agent-memory-server:latest \
  agent-memory api --host 0.0.0.0 --port 8000 --task-backend=asyncio
```

Note: This command starts the Agent Memory Server API with the asyncio task backend. You must have a running Redis instance (e.g., Redis Cloud) accessible at the URL you provide.
Build and start all services:

```bash
docker-compose up --build
```

Access the application:
- Frontend: http://localhost:3000
- Backend API: http://localhost:8001
Backend:

```bash
cd backend
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
pip install -r requirements.txt
python main.py
```

Frontend:
```bash
cd frontend
npm install
npm run dev
```

## Architecture

```
User Query
    ↓
[Retrieve Conversation Context] → Load past preferences from long-term memory
    ↓
[Parse Slots] → Extract car preferences using LLM
    ↓
[Ensure Readiness] → Check if all required slots are filled
    ↓
[Decide Next]
    ├→ Missing slots?    → Ask follow-up question
    └→ All slots filled? → Advance to next stage
    ↓
[Workflow Stages]
    ├→ Brand selected?        → Suggest models
    ├→ Model selected?        → Suggest test drive
    ├→ Test drive completed?  → Suggest financing
    └→ Financing discussed?   → Prepare for delivery
    ↓
[Save to Memory] → Store conversation and preferences
    ↓
Response to User
```
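The slot-filling and stage-advancement decision in the diagram can be sketched in plain Python. This is a simplified stand-in for the LangGraph orchestrator in `backend/orchestrator.py`, not its actual code; the slot names and stage labels are illustrative.

```python
REQUIRED_SLOTS = ("brand", "model", "budget")

# Stage order mirrors the diagram: each completed milestone advances the flow.
STAGES = ["needs_analysis", "shortlist", "test_drive", "financing", "delivery"]


def decide_next(slots: dict, stage: str) -> str:
    """Return the agent's next action: ask for a missing slot, or advance."""
    missing = [s for s in REQUIRED_SLOTS if not slots.get(s)]
    if missing:
        return f"ask:{missing[0]}"          # follow-up question
    i = STAGES.index(stage)
    if i + 1 < len(STAGES):
        return f"advance:{STAGES[i + 1]}"   # all slots filled -> next stage
    return "done"


print(decide_next({"brand": "Toyota"}, "needs_analysis"))   # ask:model
print(decide_next({"brand": "Toyota", "model": "RAV4", "budget": "30k"},
                  "needs_analysis"))                        # advance:shortlist
```

In the real workflow each node (retrieve context, parse slots, decide next, save to memory) is a LangGraph node, and the function above corresponds to the conditional edge out of "Decide Next".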
## Project Structure

```
dealership-chatbot-agent-memory-demo/
├── backend/
│   ├── main.py              # FastAPI application
│   ├── orchestrator.py      # LangGraph workflow
│   └── requirements.txt     # Python dependencies
├── frontend/
│   ├── src/
│   │   ├── components/      # React components
│   │   └── contexts/        # React contexts
│   ├── package.json
│   └── nginx.conf           # Production server config
├── docker/
│   ├── Dockerfile.backend
│   └── Dockerfile.frontend
├── terraform/
│   ├── main.tf              # AWS infrastructure
│   ├── variables.tf         # Variable definitions
│   ├── outputs.tf           # Output definitions
│   └── user_data.sh         # EC2 bootstrap script
├── docker-compose.yml
└── README.md
```
## Usage

1. Start a conversation by logging in with any username.
2. Share your preferences (e.g., "I'm looking for a 5-seater SUV").
3. Browse recommendations based on your requirements.
4. Select a model and schedule a test drive.
5. Complete the journey through financing options.

The agent remembers your preferences across sessions, so returning customers get personalized recommendations immediately.
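In the demo, extracting structured preferences from a message like "I'm looking for a 5-seater SUV" is the "Parse Slots" step, done by the LLM. As a rough illustration of what that step produces, here is a regex-based stand-in (the patterns, slot names, and function name are invented for this sketch):

```python
import re


def parse_slots(utterance: str) -> dict:
    """Crude stand-in for the LLM slot parser: pull structured car
    preferences out of a free-text customer message."""
    slots = {}
    if m := re.search(r"(\d+)[- ]seater", utterance, re.I):
        slots["seats"] = int(m.group(1))
    if m := re.search(r"\b(SUV|sedan|hatchback|truck)\b", utterance, re.I):
        slots["body_type"] = m.group(1).upper()
    if m := re.search(r"under \$?([\d,]+)k?", utterance, re.I):
        slots["budget"] = m.group(1)
    return slots


print(parse_slots("I'm looking for a 5-seater SUV under $40k"))
# {'seats': 5, 'body_type': 'SUV', 'budget': '40'}
```

An LLM parser handles paraphrase and context far better than regexes, which is why the demo uses one; the output shape, a dictionary of filled slots, is the same idea.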
## Cloud Deployment

Deploy to AWS EC2 using Terraform.
Prerequisites:
- AWS account with credentials configured
- Terraform installed (>= 1.0)
- SSH key pair in AWS EC2
Quick Start:

```bash
cd terraform
cp terraform.tfvars.example terraform.tfvars
# Edit terraform.tfvars with your values
terraform init
terraform plan
terraform apply
```

Full deployment guide: See terraform/README.md for detailed instructions.
## Resources

- Redis Agent Memory Server
- LangGraph Documentation
- LangChain Documentation
- FastAPI Documentation
- Redis Cloud
## Maintainers

- Bhavana Giri — @bhavanagiri
## License

This project is licensed under the MIT License; see the LICENSE file for details.