**a2a/README.md** (24 additions, 23 deletions)
````diff
@@ -1,10 +1,11 @@
 # 🧠 A2A Multi-Agent Fact Checker
 
-This project demonstrates a **collaborative multi-agent system** built with the **Agent2Agent SDK** ([A2A]),
-where a top-level Auditor agent coordinates the workflow to verify facts. The Critic agent gathers evidence
-via live internet searches using **DuckDuckGo** through the Model Context Protocol (**MCP**), while the Reviser
-agent analyzes and refines the conclusion using internal reasoning alone. The system showcases how agents
-with distinct roles and tools can **collaborate under orchestration**.
+This project demonstrates a **collaborative multi-agent system** built with the **Agent2Agent SDK** ([A2A])
+and [OpenAI](https://platform.openai.com/api-keys), where a top-level Auditor agent coordinates the workflow
+to verify facts. The Critic agent gathers evidence via live internet searches using **DuckDuckGo** through
+the Model Context Protocol (**MCP**), while the Reviser agent analyzes and refines the conclusion using
+internal reasoning alone. The system showcases how agents with distinct roles and tools can
+**collaborate under orchestration**.
 
 > [!Tip]
 > ✨ No configuration needed — run it with a single command.
````
````diff
@@ -25,41 +26,41 @@ with distinct roles and tools can **collaborate under orchestration**.
 + **A laptop or workstation with a GPU** (e.g., a MacBook) for running open models locally. If you don't have a GPU, you can alternatively use [**Docker Offload**](https://www.docker.com/products/docker-offload).
 + If you're using Docker Engine on Linux or Docker Desktop on Windows, ensure that the [Docker Model Runner requirements](https://docs.docker.com/ai/model-runner/) are met (specifically that GPU support is enabled) and the necessary drivers are installed
 + If you're using Docker Engine on Linux, ensure you have Compose 2.38.1 or later installed
++ An [OpenAI API Key](https://platform.openai.com/api-keys) 🔑
 
 ### Run the project
 
+Create a `secret.openai-api-key` file with your OpenAI API key:
 
-```sh
-docker compose up --build
+```
+sk-...
 ```
 
-No configuration needed — everything runs from the container. Open `http://localhost:8080` in your browser to and select `AgentKit` in the selector at the top left, then chat with
-the agents.
-
-Using Docker Offload with GPU support, you can run the same demo with a larger model that takes advantage of a more powerful GPU on the remote instance:
+Then run:
 
 ```sh
-docker compose -f compose.yaml -f compose.offload.yaml up --build
+docker compose up --build
 ```
 
-# 🧠 Inference Options
-
-By default, this project uses [Docker Model Runner] to handle LLM inference locally — no internet connection or external API key is required.
+Everything runs from the container. Open `http://localhost:8080` in your browser and then chat with
+the agents.
 
-If you’d prefer to use OpenAI instead:
+# 🧠 Inference Options
 
-1. Create a `secret.openai-api-key` file with your OpenAI API key:
+By default, this project uses [OpenAI](https://platform.openai.com) to handle LLM inference. If you'd prefer
+to use a local LLM instead, run:
 
-```
-sk-...
+```sh
+docker compose -f compose.dmr.yaml up
 ```
 
-2. Restart the project with the OpenAI configuration:
+Using [**Docker Offload**](https://www.docker.com/products/docker-offload) with GPU support, you can run the
+same demo with a larger model that takes advantage of a more powerful GPU on the remote instance:
 
+```sh
+docker compose -f compose.dmr.yaml -f compose.offload.yaml up --build
 ```
-docker compose down -v
-docker compose -f compose.yaml -f compose.openai.yaml up
````
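The reworked setup above reduces to one manual step before starting the stack: putting the key into a plain-text file that Compose mounts as a secret. A minimal sketch of that step, assuming only the `secret.openai-api-key` filename stated in the README (the `sk-...` value is a placeholder, not a real key):

```shell
# Write the API key into the secret file the compose stack reads.
# 'sk-...' is a placeholder; substitute your real OpenAI key here.
printf '%s' 'sk-...' > secret.openai-api-key

# Sanity-check the file contents before running `docker compose up --build`.
cat secret.openai-api-key
```

Note on the multi-file commands in the diff: `docker compose -f a.yaml -f b.yaml` merges the files left to right, with later files overriding earlier ones, which is how `compose.dmr.yaml` and `compose.offload.yaml` swap the inference backend without touching the base configuration.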