This guide explains how to deploy the Claude Code GPT-5 Proxy using Docker and GitHub Container Registry (GHCR).
The Docker image is available in GitHub Container Registry:
```
ghcr.io/teremterem/claude-code-gpt-5:latest
```
- Copy `.env.template` to `.env`:

  ```shell
  cp .env.template .env
  ```
- Edit `.env` and add your OpenAI API key:

  ```shell
  OPENAI_API_KEY=your-openai-api-key-here
  # Optional (see .env.template for details):
  # LITELLM_MASTER_KEY=strong-key-that-you-generated
  # More settings (see .env.template for details)
  ...
  ```
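If you decide to set `LITELLM_MASTER_KEY`, it should be a strong secret. One way to generate one is sketched below; the `sk-` prefix is just a convention here, not a requirement, and the exact format is up to you (see `.env.template`):

```shell
# Generate a random 64-hex-character secret for LITELLM_MASTER_KEY
# (requires openssl, which is preinstalled on most systems).
key="sk-$(openssl rand -hex 32)"
echo "LITELLM_MASTER_KEY=${key}"
```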
- Run the deployment script.

  To run in the foreground:

  ```shell
  ./run-docker.sh
  ```

  Alternatively, to run in the background:

  ```shell
  ./deploy-docker.sh
  ```
- Check the logs (if you ran in the background):

  ```shell
  docker logs -f claude-code-gpt-5
  ```
- Start the service with Docker Compose:

  ```shell
  docker-compose up -d
  ```

  NOTE: To run in the foreground, remove the `-d` flag.

- Check the logs:

  ```shell
  docker-compose logs -f
  ```
- Run the container directly with `docker run`:

  ```shell
  docker run -d \
    --name claude-code-gpt-5 \
    -p 4000:4000 \
    --env-file .env \
    --restart unless-stopped \
    ghcr.io/teremterem/claude-code-gpt-5:latest
  ```

  NOTE: To run in the foreground, remove the `-d` flag.

  NOTE: You can also supply the environment variables individually via the `-e` parameter, instead of `--env-file .env`.

- Check the logs:

  ```shell
  docker logs -f claude-code-gpt-5
  ```
Once the proxy is running, use it with Claude Code:
- Install Claude Code (if not already installed):

  ```shell
  npm install -g @anthropic-ai/claude-code
  ```

- Use with GPT-5 via the proxy:

  ```shell
  ANTHROPIC_BASE_URL=http://localhost:4000 claude
  ```

  If you set a master key, pass it as the Anthropic API key for the CLI:

  ```shell
  ANTHROPIC_API_KEY="<LITELLM_MASTER_KEY>" \
  ANTHROPIC_BASE_URL=http://localhost:4000 \
  claude
  ```

  NOTE: In the latter case, if you've previously authenticated, run `claude /logout` first.
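You can also sanity-check the proxy without the Claude Code CLI by sending a raw Anthropic-style request. This is a sketch: it assumes the proxy exposes an Anthropic-compatible `/v1/messages` route and that `gpt-5` is a model name your configuration routes (adjust to your setup); the `x-api-key` header only matters if you set a master key.

```shell
# Minimal Anthropic-style request through the proxy (adjust model/key as needed).
payload='{"model":"gpt-5","max_tokens":64,"messages":[{"role":"user","content":"Say hello"}]}'
curl -sS http://localhost:4000/v1/messages \
  -H "content-type: application/json" \
  -H "anthropic-version: 2023-06-01" \
  -H "x-api-key: ${LITELLM_MASTER_KEY:-dummy}" \
  -d "$payload"
```

A JSON response with a `content` field indicates the proxy is routing requests end to end.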
Check that the container is running:

```shell
docker ps | grep claude-code-gpt-5
```

Monitor resource usage:

```shell
docker stats claude-code-gpt-5
```

Stop and remove the container:

```shell
docker stop claude-code-gpt-5
docker rm claude-code-gpt-5
```

NOTE: `./kill-docker.sh` can be used to both stop and remove the container in one go.
If you deployed with Docker Compose, stop the service with:

```shell
docker-compose down
```

The container includes a health check endpoint:

```shell
curl http://localhost:4000/health
```

WARNING: LiteLLM's `/health` endpoint also checks the responsiveness of the deployed Language Models, which incurs extra costs! Keep this in mind if you decide to set up an automatic health check for your deployment.
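If you do want an automated probe, a cheaper sketch is below. It assumes LiteLLM's `/health/liveliness` endpoint (which, unlike `/health`, does not call the underlying models) is available in your LiteLLM version; verify against the LiteLLM docs for your release before relying on it.

```shell
# Probe the proxy process itself without triggering upstream model calls.
check_proxy() {
  # Returns 0 if the (assumed) liveliness endpoint answers, non-zero otherwise.
  curl -fsS --max-time 5 "http://localhost:${1:-4000}/health/liveliness" >/dev/null 2>&1
}

if check_proxy 4000; then
  echo "proxy is up"
else
  echo "proxy is down" >&2
fi
```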
If you need to build the image yourself, follow the instructions below.
NOTE: You still need to set up the `.env` file as described in the beginning of the Quick Start section.
- First, build the image:

  ```shell
  docker build -t claude-code-gpt-5 .
  ```

- Then run the container:

  ```shell
  docker run -d \
    --name claude-code-gpt-5 \
    -p 4000:4000 \
    --env-file .env \
    --restart unless-stopped \
    claude-code-gpt-5
  ```

  NOTE: To run in the foreground, remove the `-d` flag.
Build and run by overlaying the dev version of the Compose setup:

```shell
docker-compose -f docker-compose.yml -f docker-compose.dev.yml up -d --build
```

This will map the current directory into the container.

NOTE: To run in the foreground, remove the `-d` flag.
- Check if port 4000 is available:

  ```shell
  lsof -i :4000
  ```

- Verify environment variables are set correctly
- Check container logs:

  ```shell
  docker logs -f claude-code-gpt-5
  ```
- Verify your API keys are valid and have sufficient credits
- Check if OpenAI requires identity verification for GPT-5 access (see README.md, section "First time using GPT-5 via API?")
- Ensure sufficient memory is available (recommended: 2GB+)
- Check network connectivity to OpenAI and Anthropic APIs
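For the connectivity check, a quick sketch (assumes `curl` is installed): a `401` from the API without credentials still proves the network path works, while `000` means the host is unreachable.

```shell
# Probe upstream API reachability; prints the HTTP status code (000 = unreachable).
curl -sS -o /dev/null -w '%{http_code}\n' --max-time 10 https://api.openai.com/v1/models
```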
- Keep your API keys secure and never commit them to version control
- Use environment variables or Docker secrets for sensitive data
- Consider running the container in a restricted network environment
- Regularly update the image to get security patches
Claude Code CLI → LiteLLM Proxy (Port 4000) → OpenAI GPT-5 API
The proxy handles model routing and ensures compatibility between Claude Code's expectations and OpenAI's GPT-5 responses.