
Commit ef34693

refactor(a2a): use OpenAI by default (#75)

- Use OpenAI by default
- Allow using DMR with a dedicated compose file
- Support Offload with an additional compose file

1 parent 23e1784

File tree

6 files changed: +99 −66 lines changed

- README.md
- a2a/Dockerfile
- a2a/README.md
- a2a/compose.dmr.yaml
- a2a/compose.openai.yaml
- a2a/compose.yaml

README.md

Lines changed: 2 additions & 2 deletions

```diff
@@ -32,13 +32,13 @@ docker compose -f compose.yaml -f compose.openai.yaml up
 # Compose for Agents Demos - Classification
 
 | Demo | Agent System | Models | MCPs | project | compose |
-| ---- | ---- | ---- | ---- | ---- | ---- |
+| ---- | ---- | ---- | ---- | ---- | ---- |]<<<<<<< HEAD
+| [A2A](https://github.com/a2a-agents/agent2agent) Multi-Agent Fact Checker | Multi-Agent | OpenAI | duckduckgo | [./a2a](./a2a) | [compose.yaml](./a2a/compose.yaml) |
 | [Agno](https://github.com/agno-agi/agno) agent that summarizes GitHub issues | Multi-Agent | qwen3(local) | github-official | [./agno](./agno) | [compose.yaml](./agno/compose.yaml) |
 | [Vercel AI-SDK](https://github.com/vercel/ai) Chat-UI for mixing MCPs and Model | Single Agent | llama3.2(local), qwen3(local) | wikipedia-mcp, brave, resend(email) | [./vercel](./vercel) | [compose.yaml](https://github.com/slimslenderslacks/scira-mcp-chat/blob/main/compose.yaml) |
 | [CrewAI](https://github.com/crewAIInc/crewAI) Marketing Strategy Agent | Multi-Agent | qwen3(local) | duckduckgo | [./crew-ai](./crew-ai) | [compose.yaml](https://github.com/docker/compose-agents-demo/blob/main/crew-ai/compose.yaml) |
 | [ADK](https://github.com/google/adk-python) Multi-Agent Fact Checker | Multi-Agent | gemma3-qat(local) | duckduckgo | [./adk](./adk) | [compose.yaml](./adk/compose.yaml) |
 | [ADK](https://github.com/google/adk-python) & [Cerebras](https://www.cerebras.ai/) Golang Experts | Multi-Agent | unsloth/qwen3-gguf:4B-UD-Q4_K_XL & ai/qwen2.5:latest (DMR local), llama-4-scout-17b-16e-instruct (Cerebras remote) | | [./adk-cerebras](./adk-cerebras) | [compose.yml](./adk-cerebras/compose.yml) |
-| [A2A](https://github.com/a2a-agents/agent2agent) Multi-Agent Fact Checker | Multi-Agent | gemma3(local) | duckduckgo | [./a2a](./a2a) | [compose.yaml](./a2a/compose.yaml) |
 | [LangGraph](https://github.com/langchain-ai/langgraph) SQL Agent | Single Agent | qwen3(local) | postgres | [./langgraph](./langgraph) | [compose.yaml](./langgraph/compose.yaml) |
 | [Embabel](https://github.com/embabel/embabel-agent) Travel Agent | Multi-Agent | qwen3, Claude3.7, llama3.2, jimclark106/all-minilm:23M-F16 | brave, github-official, wikipedia-mcp, weather, google-maps, airbnb | [./embabel](./embabel) | [compose.yaml](https://github.com/embabel/travel-planner-agent/blob/main/compose.yaml) and [compose.dmr.yaml](https://github.com/embabel/travel-planner-agent/blob/main/compose.dmr.yaml) |
 | [Spring AI](https://spring.io/projects/spring-ai) Brave Search | Single Agent | none | brave | [./spring-ai](./spring-ai) | [compose.yaml](./spring-ai/compose.yaml) |
```

a2a/Dockerfile

Lines changed: 1 addition & 1 deletion

```diff
@@ -24,7 +24,7 @@ if test -f /run/secrets/openai-api-key; then
 fi
 
 if test -n "\${OPENAI_API_KEY}"; then
-  echo "Using OpenAI with \${MODEL_NAME}"
+  echo "Using OpenAI with \${OPENAI_MODEL_NAME}"
   export LLM_AGENT_MODEL_PROVIDER=openai
   export LLM_AGENT_MODEL_NAME=\${OPENAI_MODEL_NAME}
 else
```
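For context, the provider-selection logic this hunk touches can be sketched as a plain shell script. Only the OpenAI branch is visible in the diff; the secret-loading prologue matches the hunk header, while the `else` branch contents and the `openai-compatible` provider id are assumptions for illustration:

```shell
#!/bin/sh
# Sketch of the entrypoint's provider selection. The else-branch details are
# assumed; only the OpenAI branch appears in this commit's diff.
set -e

# Load the API key from the mounted Docker secret, if present
if test -f /run/secrets/openai-api-key; then
  export OPENAI_API_KEY=$(cat /run/secrets/openai-api-key)
fi

if test -n "${OPENAI_API_KEY}"; then
  echo "Using OpenAI with ${OPENAI_MODEL_NAME}"
  export LLM_AGENT_MODEL_PROVIDER=openai
  export LLM_AGENT_MODEL_NAME=${OPENAI_MODEL_NAME}
else
  # Fall back to Docker Model Runner; these variables are injected by the
  # `models:` block in compose.dmr.yaml via endpoint_var/model_var
  echo "Using Docker Model Runner with ${MODEL_RUNNER_MODEL}"
  export LLM_AGENT_MODEL_PROVIDER=openai-compatible  # assumed provider id
  export LLM_AGENT_MODEL_NAME=${MODEL_RUNNER_MODEL}
fi
```

The fix in this hunk only changes the log line: the exported model name already read `OPENAI_MODEL_NAME`, so the message now reports the same variable it actually uses.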

a2a/README.md

Lines changed: 24 additions & 23 deletions

````diff
@@ -1,10 +1,11 @@
 # 🧠 A2A Multi-Agent Fact Checker
 
-This project demonstrates a **collaborative multi-agent system** built with the **Agent2Agent SDK** ([A2A]),
-where a top-level Auditor agent coordinates the workflow to verify facts. The Critic agent gathers evidence
-via live internet searches using **DuckDuckGo** through the Model Context Protocol (**MCP**), while the Reviser
-agent analyzes and refines the conclusion using internal reasoning alone. The system showcases how agents
-with distinct roles and tools can **collaborate under orchestration**.
+This project demonstrates a **collaborative multi-agent system** built with the **Agent2Agent SDK** ([A2A])
+and [OpenAI](https://platform.openai.com/api-keys), where a top-level Auditor agent coordinates the workflow
+to verify facts. The Critic agent gathers evidence via live internet searches using **DuckDuckGo** through
+the Model Context Protocol (**MCP**), while the Reviser agent analyzes and refines the conclusion using
+internal reasoning alone. The system showcases how agents with distinct roles and tools can
+**collaborate under orchestration**.
 
 > [!Tip]
 > ✨ No configuration needed — run it with a single command.
@@ -25,41 +26,41 @@ with distinct roles and tools can **collaborate under orchestration**.
 + **A laptop or workstation with a GPU** (e.g., a MacBook) for running open models locally. If you don't have a GPU, you can alternatively use [**Docker Offload**](https://www.docker.com/products/docker-offload).
 + If you're using Docker Engine on Linux or Docker Desktop on Windows, ensure that the [Docker Model Runner requirements](https://docs.docker.com/ai/model-runner/) are met (specifically that GPU support is enabled) and the necessary drivers are installed
 + If you're using Docker Engine on Linux, ensure you have Compose 2.38.1 or later installed
++ An [OpenAI API Key](https://platform.openai.com/api-keys) 🔑
 
 ### Run the project
 
+Create a `secret.openai-api-key` file with your OpenAI API key:
 
-```sh
-docker compose up --build
+```
+sk-...
 ```
 
-No configuration needed — everything runs from the container. Open `http://localhost:8080` in your browser to and select `AgentKit` in the selector at the top left, then chat with
-the agents.
-
-Using Docker Offload with GPU support, you can run the same demo with a larger model that takes advantage of a more powerful GPU on the remote instance:
+Then run:
 
 ```sh
-docker compose -f compose.yaml -f compose.offload.yaml up --build
+docker compose up --build
 ```
 
-# 🧠 Inference Options
-
-By default, this project uses [Docker Model Runner] to handle LLM inference locally — no internet connection or external API key is required.
+Everything runs from the container. Open `http://localhost:8080` in your browser and then chat with
+the agents.
 
-If you’d prefer to use OpenAI instead:
+# 🧠 Inference Options
 
-1. Create a `secret.openai-api-key` file with your OpenAI API key:
+By default, this project uses [OpenAI](https://platform.openai.com) to handle LLM inference. If you'd prefer
+to use a local LLM instead, run:
 
-```
-sk-...
+```sh
+docker compose -f compose.dmr.yaml up
 ```
 
-2. Restart the project with the OpenAI configuration:
+Using [**Docker Offload**](https://www.docker.com/products/docker-offload) with GPU support, you can run the
+same demo with a larger model that takes advantage of a more powerful GPU on the remote instance:
 
+```sh
+docker compose -f compose.dmr.yaml -f compose.offload.yaml up --build
 ```
-docker compose down -v
-docker compose -f compose.yaml -f compose.openai.yaml up
-```
+
 
 # ❓ What Can It Do?
````
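The README's first step, creating the secret file, can be done from the shell. A minimal sketch, run from the `a2a/` directory; `sk-your-key-here` is a placeholder, not a real key:

```shell
# Write your OpenAI API key into the secret file that compose.yaml mounts.
# Compose then exposes it inside containers at /run/secrets/openai-api-key.
printf '%s' "sk-your-key-here" > secret.openai-api-key
```

Using `printf '%s'` rather than `echo` avoids a trailing newline in the secret, which some clients would otherwise send as part of the key.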

a2a/compose.dmr.yaml

Lines changed: 60 additions & 0 deletions (new file)

```yaml
services:
  # Auditor Agent coordinates the entire fact-checking workflow
  auditor-agent-a2a:
    build:
      target: auditor-agent
    ports:
      - "8080:8080"
    environment:
      - CRITIC_AGENT_URL=http://critic-agent-a2a:8001
      - REVISER_AGENT_URL=http://reviser-agent-a2a:8001
    depends_on:
      - critic-agent-a2a
      - reviser-agent-a2a
    models:
      gemma3:
        endpoint_var: MODEL_RUNNER_URL
        model_var: MODEL_RUNNER_MODEL

  critic-agent-a2a:
    build:
      target: critic-agent
    environment:
      - MCPGATEWAY_ENDPOINT=http://mcp-gateway:8811/sse
    depends_on:
      - mcp-gateway
    models:
      gemma3:
        # specify which environment variables to inject into the container
        endpoint_var: MODEL_RUNNER_URL
        model_var: MODEL_RUNNER_MODEL

  reviser-agent-a2a:
    build:
      target: reviser-agent
    environment:
      - MCPGATEWAY_ENDPOINT=http://mcp-gateway:8811/sse
    depends_on:
      - mcp-gateway
    models:
      gemma3:
        endpoint_var: MODEL_RUNNER_URL
        model_var: MODEL_RUNNER_MODEL

  mcp-gateway:
    # mcp-gateway secures your MCP servers
    image: docker/mcp-gateway:latest
    use_api_socket: true
    command:
      - --transport=sse
      - --servers=duckduckgo
      # add an MCP interceptor to log the responses
      - --interceptor
      - after:exec:echo RESPONSE=$(cat) >&2

models:
  # declare LLM models to pull and use
  gemma3:
    model: ai/gemma3:4B-Q4_0
    context_size: 10000 # 3.5 GB VRAM
    #context_size: 131000 # 7.6 GB VRAM
```
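The per-service `models:` blocks above make Docker Model Runner inject a base URL and a model tag into each container under the names given by `endpoint_var` and `model_var`. A sketch of how an OpenAI-compatible client inside a container might consume them; the variable values below are illustrative assumptions, not what DMR is guaranteed to inject:

```shell
# Simulated values for the variables that endpoint_var/model_var would inject
MODEL_RUNNER_URL="http://model-runner.docker.internal/engines/v1"  # assumed value
MODEL_RUNNER_MODEL="ai/gemma3:4B-Q4_0"

# An OpenAI-compatible client would target the chat completions route,
# stripping any trailing slash from the base URL first
COMPLETIONS_URL="${MODEL_RUNNER_URL%/}/chat/completions"
echo "$COMPLETIONS_URL"
```

In the default `compose.yaml` these blocks are absent, which is why that file instead passes `OPENAI_MODEL_NAME` and the API-key secret.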

a2a/compose.openai.yaml

Lines changed: 0 additions & 21 deletions
This file was deleted.

a2a/compose.yaml

Lines changed: 12 additions & 19 deletions

```diff
@@ -8,38 +8,34 @@ services:
     environment:
       - CRITIC_AGENT_URL=http://critic-agent-a2a:8001
       - REVISER_AGENT_URL=http://reviser-agent-a2a:8001
+      - OPENAI_MODEL_NAME=o3
+    secrets:
+      - openai-api-key
     depends_on:
       - critic-agent-a2a
       - reviser-agent-a2a
-    models:
-      gemma3:
-        endpoint_var: MODEL_RUNNER_URL
-        model_var: MODEL_RUNNER_MODEL
 
   critic-agent-a2a:
     build:
       target: critic-agent
     environment:
       - MCPGATEWAY_ENDPOINT=http://mcp-gateway:8811/sse
+      - OPENAI_MODEL_NAME=o3
+    secrets:
+      - openai-api-key
     depends_on:
       - mcp-gateway
-    models:
-      gemma3:
-        # specify which environment variables to inject into the container
-        endpoint_var: MODEL_RUNNER_URL
-        model_var: MODEL_RUNNER_MODEL
 
   reviser-agent-a2a:
     build:
       target: reviser-agent
     environment:
       - MCPGATEWAY_ENDPOINT=http://mcp-gateway:8811/sse
+      - OPENAI_MODEL_NAME=o3
+    secrets:
+      - openai-api-key
     depends_on:
       - mcp-gateway
-    models:
-      gemma3:
-        endpoint_var: MODEL_RUNNER_URL
-        model_var: MODEL_RUNNER_MODEL
 
   mcp-gateway:
     # mcp-gateway secures your MCP servers
@@ -52,9 +48,6 @@ services:
       - --interceptor
       - after:exec:echo RESPONSE=$(cat) >&2
 
-models:
-  # declare LLM models to pull and use
-  gemma3:
-    model: ai/gemma3:4B-Q4_0
-    context_size: 10000 # 3.5 GB VRAM
-    #context_size: 131000 # 7.6 GB VRAM
+secrets:
+  openai-api-key:
+    file: secret.openai-api-key
```
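Because the default `compose.yaml` now declares a file-backed secret, `docker compose up` will fail if `secret.openai-api-key` is absent. A small pre-flight check, run from the `a2a/` directory, can make the error clearer; this is a convenience sketch, not part of the commit:

```shell
# Pre-flight check before `docker compose up`: the secrets block above
# reads the key from secret.openai-api-key next to compose.yaml.
SECRET_FILE="secret.openai-api-key"
if [ -f "$SECRET_FILE" ]; then
  PREFLIGHT=ok
else
  PREFLIGHT=missing
fi
echo "preflight: secret file is $PREFLIGHT"
```

When the file is missing, create it as described in a2a/README.md, or switch to local inference with `docker compose -f compose.dmr.yaml up`, which needs no key.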
