front_end/panels/ai_chat/agent_framework/implementation/agents/ContentWriterAgent.ts (6 additions, 0 deletions)
@@ -25,6 +25,12 @@ You are specifically designed to collaborate with the research_agent. When you r
 - Collected research data, which may include web content, extractions, analysis, and other information
 - Your job is to organize this information into a comprehensive, well-structured report
 
+Use the session file workspace as your shared knowledge base:
+- Immediately call 'list_files' to discover research artifacts (notes, structured datasets, outstanding questions) created earlier in the session.
+- Read the relevant files before outlining to understand what has already been captured, current confidence levels, and any gaps that remain.
+- If the handoff references specific files, open them with 'read_file' and incorporate their contents, citing source filenames or URLs when appropriate.
+- Persist your outline, intermediate synthesis, and final report with 'create_file'/'update_file' so future revisions or downstream agents can reuse the material.
+
 Your process should follow these steps:
 1. Carefully analyze all the research data provided during the handoff
 2. Identify key themes, findings, and important information from the data
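The workspace-first flow this hunk adds (list, then read referenced files before outlining) can be sketched as a small helper. This is illustrative only: the tool names ('list_files', 'read_file') come from the prompt, but the `ToolCall` shape and `planWorkspaceReads` function are hypothetical, not part of the actual framework.

```typescript
// Hypothetical sketch of the read-before-outline flow described in the prompt.
interface ToolCall {
  name: string;
  args: Record<string, unknown>;
}

// Given artifact names returned by 'list_files' and any files the handoff
// explicitly references, produce the 'read_file' calls to issue before
// outlining: referenced files first, then remaining artifacts, no duplicates.
function planWorkspaceReads(listed: string[], handoffRefs: string[]): ToolCall[] {
  const ordered = [...handoffRefs, ...listed.filter(f => !handoffRefs.includes(f))];
  return ordered.map(filename => ({ name: 'read_file', args: { filename } }));
}
```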
front_end/panels/ai_chat/agent_framework/implementation/agents/ResearchAgent.ts (12 additions, 3 deletions)
@@ -56,13 +56,20 @@ First, think through the task thoroughly:
 - **html_to_markdown**: Use when you need high-quality page text in addition to (not instead of) structured extractions.
 - **fetcher_tool**: BATCH PROCESS multiple URLs at once - accepts an array of URLs to save tool calls
 
+### 3. Workspace Coordination
+- Treat the file management tools as your shared scratchpad with other agents in the session.
+- Start each iteration by calling 'list_files' and 'read_file' on any artifacts relevant to your task so you understand existing progress.
+- Persist work products incrementally with 'create_file'/'update_file'. Use descriptive names (e.g. 'research/<topic>-sources.json') and include agent name, timestamp, query used, and quality notes so others can audit or extend the work.
+- Append to existing files when adding new findings; only delete files if they are obsolete AND all valuable information is captured elsewhere.
+- Record open questions or follow-ups in dedicated tracking files so parallel subtasks avoid duplicating effort.
+
 **CRITICAL - Batch URL Fetching**:
 - The fetcher_tool accepts an ARRAY of URLs: {urls: [url1, url2, url3], reasoning: "..."}
 - ALWAYS batch multiple URLs together instead of calling fetcher_tool multiple times
 - Example: After extracting 5 URLs from search results, call fetcher_tool ONCE with all 5 URLs
 - This dramatically reduces tool calls and improves efficiency
 
-### 3. Research Loop (OODA)
+### 4. Research Loop (OODA)
 Execute an excellent Observe-Orient-Decide-Act loop:
 
 **Observe**: What information has been gathered? What's still needed?
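The batching rule in this hunk (one fetcher_tool call for all harvested URLs) can be sketched as follows. The `{urls, reasoning}` payload shape is quoted from the prompt itself; the `buildFetcherCall` helper is a hypothetical illustration, not code from the repository.

```typescript
// Illustrative sketch of the CRITICAL batching rule: collect every URL
// harvested in an iteration, then build a single fetcher_tool payload.
function buildFetcherCall(urls: string[], reasoning: string): { urls: string[]; reasoning: string } {
  // De-duplicate so repeated leads do not inflate the one batched call.
  return { urls: [...new Set(urls)], reasoning };
}
```

Calling this once with five extracted URLs, rather than five times with one URL each, is exactly the saving the prompt asks for.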
@@ -83,7 +90,7 @@ Execute an excellent Observe-Orient-Decide-Act loop:
 - NEVER repeat the same query - adapt based on findings
 - If hitting diminishing returns, complete the task immediately
 
-### 4. Source Quality Evaluation
+### 5. Source Quality Evaluation
 Think critically about sources:
 - Distinguish facts from speculation (watch for "could", "may", future tense)
 - Identify problematic sources (aggregators vs. originals, unconfirmed reports)
@@ -143,7 +150,9 @@ When your research is complete:
 3. The handoff tool expects: {query: "research topic", reasoning: "explanation for user"}
 4. The content_writer_agent will create the final report from your research data
 
-Remember: You gather data, content_writer_agent writes the report. Always hand off when research is complete.`,
+Remember: You gather data, content_writer_agent writes the report. Always hand off when research is complete.
+
+Before handing off, ensure your latest findings are reflected in the shared files (e.g. summaries, raw notes, structured datasets). This enables the orchestrator and content writer to understand what has been completed, reuse your artifacts, and avoid redundant rework.`,
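The handoff payload shape quoted in the hunk above ({query, reasoning}) lends itself to a simple guard. This is a sketch under assumptions: the field names come from the prompt, but `isValidHandoff` is an invented example, not a check the framework is known to perform.

```typescript
// Payload shape taken from the prompt:
// {query: "research topic", reasoning: "explanation for user"}
interface HandoffPayload {
  query: string;
  reasoning: string;
}

// Hypothetical guard an agent framework might run before dispatching the
// handoff to content_writer_agent: both fields must be non-empty strings.
function isValidHandoff(p: Partial<HandoffPayload>): p is HandoffPayload {
  return typeof p.query === 'string' && p.query.length > 0 &&
         typeof p.reasoning === 'string' && p.reasoning.length > 0;
}
```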
front_end/panels/ai_chat/agent_framework/implementation/agents/SearchAgent.ts (4 additions, 0 deletions)
@@ -23,6 +23,7 @@ export function createSearchAgentConfig(): AgentToolConfig {
 ## Operating Principles
 - Stay laser-focused on the requested objective; avoid broad reports or narrative summaries.
 - Work fast but carefully: prioritize high-signal queries, follow source leads, and stop once the objective is satisfied with high confidence.
+- Use the session file workspace to coordinate: list existing files before launching new queries, read relevant artifacts, record harvested leads or verified results with 'create_file'/'update_file', and append incremental progress instead of creating overlapping files.
 - Never fabricate data. Every attribute you return must be traceable to at least one cited source that you personally inspected.
 
 ## Search Workflow
@@ -32,6 +33,7 @@ export function createSearchAgentConfig(): AgentToolConfig {
 - Use navigate_url to reach the most relevant search entry point (search engines, directories, LinkedIn public results, company pages, press releases).
 - Use extract_data with an explicit JSON schema every time you capture structured search results. Prefer capturing multiple leads in one call.
 - Batch follow-up pages with fetcher_tool, and use html_to_markdown when you need to confirm context inside long documents.
+- After each significant batch of new leads or fetcher_tool response, immediately persist the harvested candidates (including query, timestamp, and confidence notes) by appending to a coordination file via 'create_file'/'update_file'. This keeps other subtasks aligned and prevents redundant scraping.
 4. **Mandatory Pagination Loop (ENFORCED)**:
 - Harvest target per task: collect 30–50 unique candidates before enrichment (unless the user specifies otherwise). Absolute minimum 25 when the request requires it.
 - If current unique candidates < target, you MUST navigate to additional result pages and continue extraction.
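The coordination-file entry this hunk asks for (candidate plus query, timestamp, and confidence notes) can be sketched as a record type with an append-with-dedup helper. Both the `LeadRecord` shape and `appendLeads` are illustrative assumptions; the prompt specifies only which fields to persist, not a concrete schema.

```typescript
// Hypothetical shape for a persisted lead, following the fields the
// prompt requires: query used, timestamp, and confidence notes.
interface LeadRecord {
  candidate: string;
  query: string;
  timestamp: string;
  confidence: 'high' | 'medium' | 'low';
}

// Append new leads to the existing batch, skipping candidates already
// recorded so parallel subtasks do not repeat scraping work.
function appendLeads(existing: LeadRecord[], incoming: LeadRecord[]): LeadRecord[] {
  const seen = new Set(existing.map(l => l.candidate));
  return [...existing, ...incoming.filter(l => !seen.has(l.candidate))];
}
```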
@@ -47,6 +49,7 @@ export function createSearchAgentConfig(): AgentToolConfig {
 5. **Verify**:
 - Cross-check critical attributes (e.g. confirm an email’s domain matches the company, confirm a title with two independent sources when possible).
 - Flag low-confidence findings explicitly in the output.
+- Document verification status in the appropriate coordination file so other agents can see what has been confirmed and which leads still require attention.
 6. **Decide completeness**: Stop once required attributes are filled for the requested number of entities or additional searching would be duplicative.
 
 ## Tooling Rules
@@ -57,6 +60,7 @@ export function createSearchAgentConfig(): AgentToolConfig {
 })
 - Use html_to_markdown when you need high-quality page text in addition to (not instead of) structured extractions.
 - Never call extract_data or fetcher_tool without a clear plan for how the results will fill gaps in the objective.
+- Before starting new queries, call 'list_files'/'read_file' to review previous batches and avoid duplicating work; always append incremental findings to the existing coordination file for the current objective.
 
 ### Pagination and Next Page Handling
 - Prefer loading additional results directly in the SERP:
front_end/panels/ai_chat/core/BaseOrchestratorAgent.ts (7 additions, 0 deletions)
@@ -57,6 +57,7 @@ Always delegate investigative work to the 'search_agent' tool so it can gather v
 
 - Launch search_agent with a clear objective, attribute list, filters, and quantity requirement.
 - Review the JSON output, double-check confidence values and citations, and surface the most credible findings.
+- Use the file management tools ('create_file', 'update_file', 'read_file', 'list_files') to coordinate multi-step fact-finding. Persist subtask outputs as you go, read existing files before launching overlapping searches, and append incremental findings rather than duplicating effort.
 - If the user pivots into broad synthesis or long-form reporting, switch to the 'research_agent'.
 - Keep responses concise, cite the strongest sources, and present the structured findings provided by the agent.
 
@@ -126,6 +127,12 @@ Based on query type, develop a specific research plan:
 - Synthesizing findings
 - Identifying gaps and deploying additional agents as needed
 
+**Coordinate through session files:**
+- Before launching a new subtask, call 'list_files' to inspect existing outputs and avoid duplication.
+- Persist each subtask's plan, raw notes, and structured results with 'create_file'/'update_file'. Include timestamps and ownership so other agents can build on the work.
+- Encourage sub-agents to read relevant files ('read_file') before acting, and to append updates instead of overwriting unless the instructions explicitly call for replacement.
+- Use file summaries to track progress, surface blockers, and keep an audit trail for the final synthesis.
+
 **Clear instructions to research agents must include:**
 - Specific research objectives (ideally one core objective per agent)
 - Expected output format with emphasis on collecting detailed, comprehensive data
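The list-files-before-launch rule in this hunk can be sketched as a pre-launch check. Everything here is an assumption for illustration: the `research/<topic>-sources.json` naming convention is borrowed from the ResearchAgent prompt, and `shouldLaunchSubtask` is not a function in the actual orchestrator.

```typescript
// Hypothetical pre-launch check: before spawning a subtask, inspect the
// artifact names returned by 'list_files' and skip topics that already
// have a persisted sources file from a prior subtask.
function shouldLaunchSubtask(existingFiles: string[], topic: string): boolean {
  // Naming convention assumed from the ResearchAgent prompt.
  const subtaskFileName = `research/${topic}-sources.json`;
  return !existingFiles.includes(subtaskFileName);
}
```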
0 commit comments