`agno/README.md`
# 🧠 Agno GitHub Issue Analyzer

This project demonstrates a **collaborative multi-agent system** built with [Agno], where specialized agents work together to analyze GitHub repositories. The **Coordinator** orchestrates the workflow between a **GitHub Issue Retriever** agent that fetches open issues via the **GitHub MCP Server**, and a **Writer** agent that summarizes and categorizes them into a comprehensive markdown report.

> [!Tip]
> ✨ No complex configuration needed — just add your GitHub token and run with a single command.
### Requirements

+ **[Docker Desktop] 4.43.0+ or [Docker Engine]** installed.
+ **A laptop or workstation with a GPU** (e.g., a MacBook) for running open models locally. If you don't have a GPU, you can alternatively use **[Docker Offload]**.
+ If you're using [Docker Engine] on Linux or [Docker Desktop] on Windows, ensure that the [Docker Model Runner requirements] are met (specifically that GPU support is enabled) and the necessary drivers are installed.
+ If you're using Docker Engine on Linux, ensure you have [Docker Compose] 2.38.1 or later installed.
+ 🔑 GitHub Personal Access Token (for public repositories)

### Setup
1. **Create a GitHub Personal Access Token:**
   - Navigate to <https://github.com/settings/personal-access-tokens>
   - Create a fine-grained token with **read access to public repositories**
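
How the token reaches the containers depends on this project's `compose.yaml`. As a purely hypothetical illustration (the file name below is an assumption, not taken from this repo), a common pattern is to drop it into a local secret file that Compose mounts for the MCP gateway:

```sh
# Hypothetical file name — check compose.yaml for the exact secret or env var it expects.
echo "github_pat_XXXXXXXX" > secret.github-personal-access-token
```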
Then start the project:

```sh
docker compose up --build
```

Using Docker Offload with GPU support, you can run the same demo with a larger model that takes advantage of a more powerful GPU on the remote instance:

```sh
docker compose -f compose.yaml -f compose.offload.yaml up --build
```

That's all! The agents will spin up automatically. Open **<http://localhost:3000>** in your browser to interact with the multi-agent system.
# 🧠 Inference Options
By default, this project uses [Docker Model Runner] to handle LLM inference locally — no internet connection or external API key is required.
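
As an optional sanity check that local inference is up, you can query Docker Model Runner's OpenAI-compatible API directly. This is a minimal sketch, assuming host-side TCP access is enabled on the default port 12434 and a Qwen 3 model has been pulled; the exact port, path, and model tag may differ in your setup:

```sh
# List the models Docker Model Runner is currently serving (OpenAI-compatible API).
curl http://localhost:12434/engines/v1/models

# Send a small test chat completion (the model tag is illustrative).
curl http://localhost:12434/engines/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "ai/qwen3", "messages": [{"role": "user", "content": "Say hello"}]}'
```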
If you’d prefer to use OpenAI instead:

1. Create a `secret.openai-api-key` file with your OpenAI API key:

   ```plaintext
   sk-...
   ```

2. Restart the project with the OpenAI configuration:

   ```sh
   docker compose down -v
   docker compose -f compose.yaml -f compose.openai.yaml up
   ```
# ❓ What Can It Do?
Give it any public GitHub repository and watch the agents collaborate to deliver a comprehensive analysis:

+ **Fetch Issues**: The GitHub agent retrieves all open issues with their details
+ **Analyze & Categorize**: The Writer agent classifies issues into categories (bugs, features, documentation)
+ **Generate Report**: Creates a structured markdown summary with issue links and descriptions

**Example queries:**

+ `summarize the issues in the repo microsoft/vscode`
+ `analyze issues in facebook/react`
+ `categorize the problems in tensorflow/tensorflow`

The **Coordinator** orchestrates the entire workflow, ensuring each agent performs its specialized task efficiently.
*(Mermaid `flowchart TD` architecture diagram omitted.)*

+ The **Coordinator** orchestrates the multi-agent workflow using Agno's team coordination
+ **GitHub Issue Retriever** connects to GitHub via the secure MCP Gateway
+ **Writer** processes and categorizes the retrieved data into structured reports
+ All agents use **[Docker Model Runner]** with Qwen 3 for local LLM inference
+ The **Next.js UI** provides an intuitive chat interface for repository analysis
# 🛠️ Agent Configuration
The agents are configured in `agents.yaml` with specific roles and instructions:

+ **GitHub Agent**: Specialized in retrieving GitHub issues with precise API calls
+ **Writer Agent**: Expert in summarization and categorization with markdown formatting
+ **Coordinator Team**: Orchestrates the workflow between specialized agents

Each agent uses the **[Docker Model Runner]** for inference, ensuring consistent performance without external API dependencies.
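
For a sense of what such a configuration can look like, here is a hypothetical sketch; the field names and values below are illustrative only and are not copied from this repository's `agents.yaml`:

```yaml
# Hypothetical sketch — not the actual schema used by this project.
agents:
  github-issue-retriever:
    role: Fetch open issues from a GitHub repository via the MCP gateway
    instructions: |
      List all open issues with their titles, labels, links, and short descriptions.
  writer:
    role: Summarize and categorize retrieved issues
    instructions: |
      Group issues into bugs, features, and documentation, then write a markdown report.

team:
  coordinator:
    members: [github-issue-retriever, writer]
    instructions: |
      Retrieve issues first, then pass them to the writer to produce the final report.
```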
# 🧹 Cleanup
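
To stop the stack and remove its volumes:

```sh
docker compose down -v
```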
# 📎 Credits
+ [Agno] - Multi-agent framework
+ [GitHub MCP Server] - Model Context Protocol integration
+ [Docker Compose] - Container orchestration

---

`agno/agent-ui/README.md`
# Agent UI
A modern chat interface for AI agents built with Next.js, Tailwind CSS, and TypeScript. This template provides a ready-to-use UI for interacting with Agno agents.
### Prerequisites

Before setting up Agent UI, you may want to have an Agno Playground running. If you haven't set up the Agno Playground yet, follow the [official guide](https://agno.link/agent-ui#connect-to-local-agents) to run the Playground locally.

4. Open [http://localhost:3000](http://localhost:3000) with your browser to see the result.
## Connecting to an Agent Backend

By default Agent UI connects to `http://localhost:7777`. You can easily change this by hovering over the endpoint URL and clicking the edit option.
The default endpoint works with the standard Agno Playground setup described in the [official documentation](https://agno.link/agent-ui#connect-to-local-agents).