Why
- Zero local toolchain fuss
  A Docker image can bundle Claude Code, Node, Python, pip, npm, etc., so every dev machine — laptop, CI runner, or cloud VM — sees the same stack. No more “works on my machine” when a teammate upgrades Node.
- Run anywhere you have a Docker daemon
  Point `DOCKER_HOST` (or a Docker context) at an on-prem box, a beefy EC2/GCE VM, or even a Kubernetes worker (see the sketch after this list). Your local Claudia UI streams stdout/stderr exactly as today, but the CPU/RAM live remotely.
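A quick sketch of what the two targets could look like from Node, assuming the dockerode client (an illustrative choice, not something Claudia uses today); the hostname and port are hypothetical:

```ts
import Docker from "dockerode";

// Local daemon via the default Unix socket.
const local = new Docker({ socketPath: "/var/run/docker.sock" });

// Remote daemon on a beefy VM. With no options at all, dockerode's
// transport falls back to the DOCKER_HOST environment variable instead.
const remote = new Docker({
  host: "build-box.internal",
  port: 2376,
  protocol: "https", // a real setup would also pass ca/cert/key for TLS
});
```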
What
- Add an `executionMode: "process" | "container"` setting. A `DockerExecutor` wraps the current `Command` interface (a sketch follows this list):

  ```sh
  docker run --rm \
    -v "${projectPath}:/workspace" -w /workspace \
    -e CLAUDE_API_KEY \
    asterisk/claude-cli:<tag> <args>
  ```
- Images can be per-project (`docker build -f Dockerfile.claude .`) or a shared tag.
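A minimal sketch of that wrapper, again assuming dockerode; the `Executor` interface and every name here are illustrative, not Claudia's actual internals:

```ts
import Docker from "dockerode";
import { PassThrough } from "node:stream";

// Stand-in for the current process-based command interface.
interface Executor {
  run(args: string[], onOutput: (chunk: string) => void): Promise<number>;
}

class DockerExecutor implements Executor {
  constructor(
    private docker: Docker,      // local or remote, as configured above
    private image: string,       // e.g. "asterisk/claude-cli:<tag>"
    private projectPath: string,
  ) {}

  async run(args: string[], onOutput: (chunk: string) => void): Promise<number> {
    const output = new PassThrough();
    output.on("data", (chunk) => onOutput(chunk.toString()));

    // Equivalent of the `docker run` invocation above.
    const [result] = await this.docker.run(this.image, args, output, {
      WorkingDir: "/workspace",                    // -w /workspace
      Env: [`CLAUDE_API_KEY=${process.env.CLAUDE_API_KEY ?? ""}`],
      HostConfig: {
        AutoRemove: true,                          // --rm
        Binds: [`${this.projectPath}:/workspace`], // -v, local daemon only
      },
    });
    return result.StatusCode;
  }
}
```

Note the bind mount only resolves when the daemon is local; the remote case is the file-access point under Implementation below.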
Nice side-effects
- Windows friendliness – users still need Docker Desktop (WSL 2 or Hyper-V backend), but no manual WSL install of Claude, Node, etc.
- Reproducible builds – image tag pins language/tool versions.
- Security – combines Claudia’s existing sandbox with the kernel isolation of containers.
Sandbox ≠ Dependencies
Claudia’s sandbox hardens file/network access (seccomp, ACLs) but does not freeze tool versions; containers complement that by pinning runtimes.
Implementation
I guess the most practical approach would be to:
- Create a Docker abstraction layer that implements the same interface as the current command execution
- Modify the command creation functions to use the Docker API instead of direct process spawning
- Handle project file access: with a remote Docker daemon a local bind mount is not enough, so the project has to be copied or synced into the container (first sketch below)
- Adapt the streaming output handling to work with containerized processes (second sketch below)
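For the file-access point, one option when the daemon is remote is to tar the project up and push it through the Docker API instead of bind-mounting; `tar-fs` here is just one convenient packer, and the container id and paths are illustrative:

```ts
import Docker from "dockerode";
import { pack } from "tar-fs";

// Copy the project into an existing container over the API, so it works
// even when the daemon is remote and a local -v mount would be meaningless.
async function uploadProject(docker: Docker, containerId: string, projectPath: string) {
  const container = docker.getContainer(containerId);
  // putArchive streams a tarball and extracts it at `path` inside the container.
  await container.putArchive(pack(projectPath), { path: "/workspace" });
}
```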
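And for the streaming point: when a container runs without a TTY, Docker multiplexes stdout and stderr over a single connection, so the attach stream has to be demultiplexed before lines are forwarded to the UI. A sketch using dockerode's helper:

```ts
import Docker from "dockerode";
import { PassThrough } from "node:stream";

// Attach to a running container and split its multiplexed output back
// into stdout/stderr, then forward each chunk exactly as today.
async function streamOutput(docker: Docker, containerId: string) {
  const container = docker.getContainer(containerId);
  const stream = await container.attach({ stream: true, stdout: true, stderr: true });

  const stdout = new PassThrough();
  const stderr = new PassThrough();
  container.modem.demuxStream(stream, stdout, stderr);

  stdout.on("data", (chunk) => process.stdout.write(chunk));
  stderr.on("data", (chunk) => process.stderr.write(chunk));
}
```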
Does it make sense?