- Use markdownlint-cli2 to lint Markdown files
- Add a workflow that runs on pull requests and pushes, linting all
  Markdown files (a sketch of such a workflow is shown below)
- Add a minimal configuration for the linter
- Update all .md files to comply with the linter rules
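
For reference, the sketches below show one way such a workflow and configuration could look. They are illustrative assumptions only; the file names, glob patterns, and rule settings are not necessarily the exact files added by this PR.

```yaml
# Hypothetical .github/workflows/markdown-lint.yml — an illustrative sketch,
# not necessarily the workflow added by this PR.
name: Lint Markdown

on:
  push:
  pull_request:

jobs:
  markdownlint:
    runs-on: ubuntu-latest
    steps:
      # Check out the repository so the linter can see the Markdown files
      - uses: actions/checkout@v4
      # GitHub-hosted Ubuntu runners ship with Node.js, so npx is available
      - name: Run markdownlint-cli2 on all Markdown files
        run: npx --yes markdownlint-cli2 "**/*.md"
```

A minimal configuration typically lives in a `.markdownlint-cli2.yaml` file and adjusts only a rule or two; for example (again an assumption, though the diff below does show long lines being wrapped to roughly 100 characters):

```yaml
# Hypothetical .markdownlint-cli2.yaml — an illustrative sketch of a minimal configuration.
config:
  MD013:
    line_length: 100   # allow lines up to 100 characters before flagging them
globs:
  - "**/*.md"         # glob patterns for the Markdown files to lint
```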
README.md — 28 additions & 15 deletions
````diff
@@ -2,43 +2,50 @@
 
 ## Prerequisites
 
-+ **[Docker Desktop](https://www.docker.com/products/docker-desktop/) 4.43.0+ or [Docker Engine](https://docs.docker.com/engine/)** installed
-+ **A laptop or workstation with a GPU** (e.g., a MacBook) for running open models locally. If you don't have a GPU, you can alternatively use [**Docker Offload**](https://www.docker.com/products/docker-offload).
-  + If you're using Docker Engine on Linux or Docker Desktop on Windows, ensure that the [Docker Model Runner requirements](https://docs.docker.com/ai/model-runner/) are met (specifically that GPU support is enabled) and the necessary drivers are installed
-  + If you're using Docker Engine on Linux, ensure you have Compose 2.38.1 or later installed
++ **[Docker Desktop] 4.43.0+ or [Docker Engine]** installed.
++ **A laptop or workstation with a GPU** (e.g., a MacBook) for running open models locally. If you
+  don't have a GPU, you can alternatively use **[Docker Offload]**.
+  + If you're using [Docker Engine] on Linux or [Docker Desktop] on Windows, ensure that the
+    [Docker Model Runner requirements] are met (specifically that GPU
+    support is enabled) and the necessary drivers are installed.
+  + If you're using Docker Engine on Linux, ensure you have [Docker Compose] 2.38.1 or later installed.
 
 ## Demos
 
-Each of these demos is self-contained and can be run either locally or using a cloud context. They are all configured using two steps.
+Each of these demos is self-contained and can be run either locally or using a cloud context. They
+are all configured using two steps.
 
 1. change directory to the root of the demo project
-1. create a `.mcp.env` file from the `mcp.env.example` file (if it exists, otherwise the demo doesn't need any secrets) and supply the required MCP tokens
-1. run `docker compose up --build`
+2. create a `.mcp.env` file from the `mcp.env.example` file (if it exists, otherwise the demo
+   doesn't need any secrets) and supply the required MCP tokens
+3. run `docker compose up --build`
 
 ### Using OpenAI models
 
 The demos support using OpenAI models instead of running models locally with Docker Model Runner. To use OpenAI:
+
 1. Create a `secret.openai-api-key` file with your OpenAI API key:
 
-   ```
-   sk-...
-   ```
+   ```plaintext
+   sk-...
+   ```
+
 2. Start the project with the OpenAI configuration:
 
-   ```
-   docker compose -f compose.yaml -f compose.openai.yaml up
-   ```
+   ```sh
+   docker compose -f compose.yaml -f compose.openai.yaml up
````
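
As an aside on the `docker compose -f compose.yaml -f compose.openai.yaml up` pattern above: the second `-f` file is an override that Compose merges over the base file. A purely hypothetical sketch of such an override follows; the service name and environment variable are assumptions, not taken from this repository.

```yaml
# Hypothetical compose.openai.yaml-style override — illustrative only.
# Compose merges this file over compose.yaml when both are passed with -f.
services:
  agent:                          # assumed service name; the real compose.yaml defines its own
    environment:
      OPENAI_MODEL: gpt-4o-mini   # assumed variable; the demos may use different names
    secrets:
      - openai-api-key

secrets:
  openai-api-key:
    file: ./secret.openai-api-key # the file created earlier with the API key
```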
a2a/README.md — 26 additions & 26 deletions
````diff
@@ -10,7 +10,6 @@ internal reasoning alone. The system showcases how agents with distinct roles an
 > [!Tip]
 > ✨ No configuration needed — run it with a single command.
 
-
 <p align="center">
   <img src="demo.gif"
        alt="A2A Multi-Agent Fact Check Demo"
@@ -22,17 +21,20 @@ internal reasoning alone. The system showcases how agents with distinct roles an
 
 ### Requirements
 
-+ **[Docker Desktop](https://www.docker.com/products/docker-desktop/) 4.43.0+ or [Docker Engine](https://docs.docker.com/engine/)** installed
-+ **A laptop or workstation with a GPU** (e.g., a MacBook) for running open models locally. If you don't have a GPU, you can alternatively use [**Docker Offload**](https://www.docker.com/products/docker-offload).
-  + If you're using Docker Engine on Linux or Docker Desktop on Windows, ensure that the [Docker Model Runner requirements](https://docs.docker.com/ai/model-runner/) are met (specifically that GPU support is enabled) and the necessary drivers are installed
-  + If you're using Docker Engine on Linux, ensure you have Compose 2.38.1 or later installed
-  + An [OpenAI API Key](https://platform.openai.com/api-keys) 🔑
++ **[Docker Desktop] 4.43.0+ or [Docker Engine]** installed.
++ **A laptop or workstation with a GPU** (e.g., a MacBook) for running open models locally. If you
+  don't have a GPU, you can alternatively use **[Docker Offload]**.
+  + If you're using [Docker Engine] on Linux or [Docker Desktop] on Windows, ensure that the
+    [Docker Model Runner requirements] are met (specifically that GPU
+    support is enabled) and the necessary drivers are installed.
+  + If you're using Docker Engine on Linux, ensure you have [Docker Compose] 2.38.1 or later installed.
++ An [OpenAI API Key](https://platform.openai.com/api-keys) 🔑.
 
 ### Run the project
 
 Create a `secret.openai-api-key` file with your OpenAI API key:
 
-```
+```plaintext
 sk-...
 ```
 
@@ -61,18 +63,17 @@ same demo with a larger model that takes advantage of a more powerful GPU on the
 docker compose -f compose.dmr.yaml -f compose.offload.yaml up --build
 ```
 
-
 # ❓ What Can It Do?
 
 This system performs multi-agent fact verification, coordinated by an **Auditor**:
 
-- 🧑⚖️ **Auditor**:
-  - Orchestrates the process from input to verdict.
-  - Delegates tasks to Critic and Reviser agents.
-- 🧠 **Critic**:
-  - Uses DuckDuckGo via MCP to gather real-time external evidence.
-- ✍️ **Reviser**:
-  - Refines and verifies the Critic’s conclusions using only reasoning.
++ 🧑⚖️ **Auditor**:
+  * Orchestrates the process from input to verdict.
+  * Delegates tasks to Critic and Reviser agents.
++ 🧠 **Critic**:
+  * Uses DuckDuckGo via MCP to gather real-time external evidence.
++ ✍️ **Reviser**:
+  * Refines and verifies the Critic’s conclusions using only reasoning.
 
 **🧠 All agents use the Docker Model Runner for LLM-based inference.**
 
@@ -89,7 +90,6 @@ Example question:
 |`src/AgentKit`| Agent runtime |
 |`agents/*.yaml`| Agent definitions |
 
-
 # 🔧 Architecture Overview
 
 ```mermaid
@@ -115,10 +115,10 @@ flowchart TD
 
 ```
 
-- The Auditor is a Sequential Agent, it coordinates Critic and Reviser agents to verify user-provided claims.
-- The Critic agent performs live web searches through DuckDuckGo using an MCP-compatible gateway.
-- The Reviser agent refines the Critic’s conclusions using internal reasoning alone.
-- All agents run inference through a Docker-hosted Model Runner, enabling fully containerized LLM reasoning.
++ The Auditor is a Sequential Agent, it coordinates Critic and Reviser agents to verify user-provided claims.
++ The Critic agent performs live web searches through DuckDuckGo using an MCP-compatible gateway.
++ The Reviser agent refines the Critic’s conclusions using internal reasoning alone.
++ All agents run inference through a Docker-hosted Model Runner, enabling fully containerized LLM reasoning.
 
 # 🤝 Agent Roles
 
@@ -128,7 +128,6 @@ flowchart TD
 |**Critic**| ✅ DuckDuckGo via MCP | Gathers evidence to support or refute the claim |
 |**Reviser**| ❌ None | Refines and finalizes the answer without external input |
 
-
 # 🧹 Cleanup
 
 To stop and remove containers and volumes:
@@ -137,15 +136,16 @@
````
adk-cerebras/README.md — 33 additions & 13 deletions
````diff
@@ -1,31 +1,39 @@
 # DevDuck agents
 
-A multi-agent system for Go programming assistance built with Google Agent Development Kit (ADK). This project features a coordinating agent (DevDuck) that manages two specialized sub-agents (Bob and Cerebras) for different programming tasks.
+A multi-agent system for Go programming assistance built with Google Agent Development Kit (ADK). This
+project features a coordinating agent (DevDuck) that manages two specialized sub-agents (Bob and
+Cerebras) for different programming tasks.
 
 ## Architecture
 
-The system consists of three main agents orchestrated by Docker Compose, which plays a **primordial role** in launching and coordinating all agent services:
+The system consists of three main agents orchestrated by Docker Compose, which plays a
+**primordial role** in launching and coordinating all agent services:
 
 ### 🐙 Docker Compose Orchestration
+
 - **Central Role**: Docker Compose serves as the foundation for the entire multi-agent system
 - **Service Orchestration**: Manages the lifecycle of all three agents (DevDuck, Bob, and Cerebras)
-- **Configuration Management**: Defines agent prompts, model configurations, and service dependencies directly in the compose file
+- **Configuration Management**: Defines agent prompts, model configurations, and service dependencies
+  directly in the compose file
 - **Network Coordination**: Establishes secure inter-agent communication channels
 - **Environment Management**: Handles API keys, model parameters, and runtime configurations
 
-### Agent Components:
+### Agent Components
 
 ### 🦆 DevDuck (Main Agent)
+
 - **Role**: Main development assistant and project coordinator
@@ -43,21 +51,25 @@ The system consists of three main agents orchestrated by Docker Compose, which p
 
 ### Prerequisites
 
-+ **[Docker Desktop](https://www.docker.com/products/docker-desktop/) 4.43.0+ or [Docker Engine](https://docs.docker.com/engine/)** installed
-+ **A laptop or workstation with a GPU** (e.g., a MacBook) for running open models locally. If you don't have a GPU, you can alternatively use [**Docker Offload**](https://www.docker.com/products/docker-offload).
-  + If you're using Docker Engine on Linux or Docker Desktop on Windows, ensure that the [Docker Model Runner requirements](https://docs.docker.com/ai/model-runner/) are met (specifically that GPU support is enabled) and the necessary drivers are installed
-  + If you're using Docker Engine on Linux, ensure you have Compose 2.38.1 or later installed
+- **[Docker Desktop] 4.43.0+ or [Docker Engine]** installed.
+- **A laptop or workstation with a GPU** (e.g., a MacBook) for running open models locally. If you
+  don't have a GPU, you can alternatively use **[Docker Offload]**.
+  - If you're using [Docker Engine] on Linux or [Docker Desktop] on Windows, ensure that the
+    [Docker Model Runner requirements] are met (specifically that GPU
+    support is enabled) and the necessary drivers are installed.
+  - If you're using Docker Engine on Linux, ensure you have [Docker Compose] 2.38.1 or later installed.
 
 ### Configuration
 
-1. **You need a Cerebras API Key**: https://cloud.cerebras.ai/
+1. **You need a Cerebras API Key**: <https://cloud.cerebras.ai/>
 2. Create a `.env` file with the following content:
````
0 commit comments