
Commit 2589bda

Update agent prompts with file system management
1 parent 38d57f1 commit 2589bda

4 files changed (+29, -3 lines)


front_end/panels/ai_chat/agent_framework/implementation/agents/ContentWriterAgent.ts

Lines changed: 6 additions & 0 deletions
@@ -25,6 +25,12 @@ You are specifically designed to collaborate with the research_agent. When you r
 - Collected research data, which may include web content, extractions, analysis, and other information
 - Your job is to organize this information into a comprehensive, well-structured report
 
+Use the session file workspace as your shared knowledge base:
+- Immediately call 'list_files' to discover research artifacts (notes, structured datasets, outstanding questions) created earlier in the session.
+- Read the relevant files before outlining to understand what has already been captured, current confidence levels, and any gaps that remain.
+- If the handoff references specific files, open them with 'read_file' and incorporate their contents, citing source filenames or URLs when appropriate.
+- Persist your outline, intermediate synthesis, and final report with 'create_file'/'update_file' so future revisions or downstream agents can reuse the material.
+
 Your process should follow these steps:
 1. Carefully analyze all the research data provided during the handoff
 2. Identify key themes, findings, and important information from the data
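
The added guidance only names the session file tools ('list_files', 'read_file', 'create_file', 'update_file'); the commit does not show their signatures. A minimal TypeScript sketch of the discover-read-persist flow the prompt asks the content writer to follow might look like the following, where `SessionFileTools`, `draftReport`, and the file paths are illustrative assumptions:

```ts
// A minimal sketch of the discover -> read -> persist flow described above.
// 'SessionFileTools', 'draftReport', and the file paths are illustrative
// assumptions; only the tool names come from the prompt text in this commit.
interface SessionFileTools {
  list_files(): Promise<string[]>;
  read_file(args: { path: string }): Promise<string>;
  create_file(args: { path: string; content: string }): Promise<void>;
  update_file(args: { path: string; content: string }): Promise<void>;
}

async function draftReport(tools: SessionFileTools): Promise<void> {
  // 1. Discover research artifacts left earlier in the session.
  const files = await tools.list_files();
  const researchNotes = files.filter(f => f.startsWith('research/'));

  // 2. Read the relevant files before outlining.
  const sources = await Promise.all(
    researchNotes.map(path => tools.read_file({ path })),
  );

  // 3. Persist the outline so downstream agents can reuse it.
  await tools.create_file({
    path: 'report/outline.md',
    content: `# Outline\n\nBased on ${sources.length} research artifact(s).`,
  });
}
```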

front_end/panels/ai_chat/agent_framework/implementation/agents/ResearchAgent.ts

Lines changed: 12 additions & 3 deletions
@@ -56,13 +56,20 @@ First, think through the task thoroughly:
 - **html_to_markdown**: Use when you need high-quality page text in addition to (not instead of) structured extractions.
 - **fetcher_tool**: BATCH PROCESS multiple URLs at once - accepts an array of URLs to save tool calls
 
+### 3. Workspace Coordination
+- Treat the file management tools as your shared scratchpad with other agents in the session.
+- Start each iteration by calling 'list_files' and 'read_file' on any artifacts relevant to your task so you understand existing progress.
+- Persist work products incrementally with 'create_file'/'update_file'. Use descriptive names (e.g. 'research/<topic>-sources.json') and include agent name, timestamp, query used, and quality notes so others can audit or extend the work.
+- Append to existing files when adding new findings; only delete files if they are obsolete AND all valuable information is captured elsewhere.
+- Record open questions or follow-ups in dedicated tracking files so parallel subtasks avoid duplicating effort.
+
 **CRITICAL - Batch URL Fetching**:
 - The fetcher_tool accepts an ARRAY of URLs: {urls: [url1, url2, url3], reasoning: "..."}
 - ALWAYS batch multiple URLs together instead of calling fetcher_tool multiple times
 - Example: After extracting 5 URLs from search results, call fetcher_tool ONCE with all 5 URLs
 - This dramatically reduces tool calls and improves efficiency
 
-### 3. Research Loop (OODA)
+### 4. Research Loop (OODA)
 Execute an excellent Observe-Orient-Decide-Act loop:
 
 **Observe**: What information has been gathered? What's still needed?

@@ -83,7 +90,7 @@ Execute an excellent Observe-Orient-Decide-Act loop:
 - NEVER repeat the same query - adapt based on findings
 - If hitting diminishing returns, complete the task immediately
 
-### 4. Source Quality Evaluation
+### 5. Source Quality Evaluation
 Think critically about sources:
 - Distinguish facts from speculation (watch for "could", "may", future tense)
 - Identify problematic sources (aggregators vs. originals, unconfirmed reports)

@@ -143,7 +150,9 @@ When your research is complete:
 3. The handoff tool expects: {query: "research topic", reasoning: "explanation for user"}
 4. The content_writer_agent will create the final report from your research data
 
-Remember: You gather data, content_writer_agent writes the report. Always hand off when research is complete.`,
+Remember: You gather data, content_writer_agent writes the report. Always hand off when research is complete.
+
+Before handing off, ensure your latest findings are reflected in the shared files (e.g. summaries, raw notes, structured datasets). This enables the orchestrator and content writer to understand what has been completed, reuse your artifacts, and avoid redundant rework.`,
 tools: [
 'navigate_url',
 'navigate_back',
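
The new Workspace Coordination section asks for descriptive artifact names such as 'research/<topic>-sources.json' carrying agent name, timestamp, query used, and quality notes. One way such an artifact could be shaped, with field names that are purely illustrative rather than anything defined in this commit, is:

```ts
// Illustrative shape for a 'research/<topic>-sources.json' artifact as the
// prompt describes it. Field names are assumptions; the prompt only asks for
// agent name, timestamp, query used, and quality notes alongside the findings.
interface ResearchArtifact {
  agent: string;             // which agent produced or last updated the file
  timestamp: string;         // ISO 8601 time of the update
  query: string;             // query or objective used to gather the data
  qualityNotes: string;      // confidence levels, caveats, known gaps
  sources: Array<{ url: string; title: string; keyFindings: string[] }>;
  openQuestions: string[];   // follow-ups so parallel subtasks avoid duplication
}

const exampleArtifact: ResearchArtifact = {
  agent: 'research_agent',
  timestamp: new Date().toISOString(),
  query: 'hypothetical example query',
  qualityNotes: 'Two independent sources agree; one figure still unconfirmed.',
  sources: [
    { url: 'https://example.com/report', title: 'Example source', keyFindings: ['...'] },
  ],
  openQuestions: ['Confirm the unverified figure with a primary source.'],
};
```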

front_end/panels/ai_chat/agent_framework/implementation/agents/SearchAgent.ts

Lines changed: 4 additions & 0 deletions
@@ -23,6 +23,7 @@ export function createSearchAgentConfig(): AgentToolConfig {
 ## Operating Principles
 - Stay laser-focused on the requested objective; avoid broad reports or narrative summaries.
 - Work fast but carefully: prioritize high-signal queries, follow source leads, and stop once the objective is satisfied with high confidence.
+- Use the session file workspace to coordinate: list existing files before launching new queries, read relevant artifacts, record harvested leads or verified results with 'create_file'/'update_file', and append incremental progress instead of creating overlapping files.
 - Never fabricate data. Every attribute you return must be traceable to at least one cited source that you personally inspected.
 
 ## Search Workflow

@@ -32,6 +33,7 @@ export function createSearchAgentConfig(): AgentToolConfig {
 - Use navigate_url to reach the most relevant search entry point (search engines, directories, LinkedIn public results, company pages, press releases).
 - Use extract_data with an explicit JSON schema every time you capture structured search results. Prefer capturing multiple leads in one call.
 - Batch follow-up pages with fetcher_tool, and use html_to_markdown when you need to confirm context inside long documents.
+- After each significant batch of new leads or fetcher_tool response, immediately persist the harvested candidates (including query, timestamp, and confidence notes) by appending to a coordination file via 'create_file'/'update_file'. This keeps other subtasks aligned and prevents redundant scraping.
 4. **Mandatory Pagination Loop (ENFORCED)**:
 - Harvest target per task: collect 30–50 unique candidates before enrichment (unless the user specifies otherwise). Absolute minimum 25 when the request requires it.
 - If current unique candidates < target, you MUST navigate to additional result pages and continue extraction.

@@ -47,6 +49,7 @@ export function createSearchAgentConfig(): AgentToolConfig {
 5. **Verify**:
 - Cross-check critical attributes (e.g. confirm an email’s domain matches the company, confirm a title with two independent sources when possible).
 - Flag low-confidence findings explicitly in the output.
+- Document verification status in the appropriate coordination file so other agents can see what has been confirmed and which leads still require attention.
 6. **Decide completeness**: Stop once required attributes are filled for the requested number of entities or additional searching would be duplicative.
 
 ## Tooling Rules

@@ -57,6 +60,7 @@ export function createSearchAgentConfig(): AgentToolConfig {
 })
 - Use html_to_markdown when you need high-quality page text in addition to (not instead of) structured extractions.
 - Never call extract_data or fetcher_tool without a clear plan for how the results will fill gaps in the objective.
+- Before starting new queries, call 'list_files'/'read_file' to review previous batches and avoid duplicating work; always append incremental findings to the existing coordination file for the current objective.
 
 ### Pagination and Next Page Handling
 - Prefer loading additional results directly in the SERP:
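
The search prompt requires appending each batch of harvested leads to an existing coordination file rather than creating overlapping files. A sketch of that append step is shown below; the `FileTools` wrapper signatures and the `Lead` type are assumptions, and the framework's real file tools may differ:

```ts
// A sketch of the "append each batch to the coordination file" step. The tool
// wrapper signatures and the Lead type are assumptions; the framework's real
// file tools may differ.
interface Lead {
  name: string;
  url: string;
  confidence: 'high' | 'medium' | 'low';
}

interface FileTools {
  list_files(): Promise<string[]>;
  read_file(args: { path: string }): Promise<string>;
  create_file(args: { path: string; content: string }): Promise<void>;
  update_file(args: { path: string; content: string }): Promise<void>;
}

async function appendLeads(
  tools: FileTools,
  path: string,
  query: string,
  newLeads: Lead[],
): Promise<void> {
  // Reuse the existing coordination file instead of creating an overlapping one.
  const exists = (await tools.list_files()).includes(path);
  const record = exists
    ? JSON.parse(await tools.read_file({ path }))
    : { batches: [] };

  // Record query and timestamp with each batch so other agents can audit it.
  record.batches.push({ query, timestamp: new Date().toISOString(), leads: newLeads });

  const content = JSON.stringify(record, null, 2);
  if (exists) {
    await tools.update_file({ path, content });
  } else {
    await tools.create_file({ path, content });
  }
}
```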

front_end/panels/ai_chat/core/BaseOrchestratorAgent.ts

Lines changed: 7 additions & 0 deletions
@@ -57,6 +57,7 @@ Always delegate investigative work to the 'search_agent' tool so it can gather v
 
 - Launch search_agent with a clear objective, attribute list, filters, and quantity requirement.
 - Review the JSON output, double-check confidence values and citations, and surface the most credible findings.
+- Use the file management tools ('create_file', 'update_file', 'read_file', 'list_files') to coordinate multi-step fact-finding. Persist subtask outputs as you go, read existing files before launching overlapping searches, and append incremental findings rather than duplicating effort.
 - If the user pivots into broad synthesis or long-form reporting, switch to the 'research_agent'.
 - Keep responses concise, cite the strongest sources, and present the structured findings provided by the agent.
 

@@ -126,6 +127,12 @@ Based on query type, develop a specific research plan:
 - Synthesizing findings
 - Identifying gaps and deploying additional agents as needed
 
+**Coordinate through session files:**
+- Before launching a new subtask, call 'list_files' to inspect existing outputs and avoid duplication.
+- Persist each subtask's plan, raw notes, and structured results with 'create_file'/'update_file'. Include timestamps and ownership so other agents can build on the work.
+- Encourage sub-agents to read relevant files ('read_file') before acting, and to append updates instead of overwriting unless the instructions explicitly call for replacement.
+- Use file summaries to track progress, surface blockers, and keep an audit trail for the final synthesis.
+
 **Clear instructions to research agents must include:**
 - Specific research objectives (ideally one core objective per agent)
 - Expected output format with emphasis on collecting detailed, comprehensive data
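
The orchestrator bullets call for persisted subtask plans with timestamps and ownership so sub-agents can build on each other's work. One possible progress-file layout is sketched below; the path 'orchestrator/progress.json' and every field name are assumptions for illustration, not part of this commit:

```ts
// One possible layout for an orchestrator progress file, sketched from the
// coordination bullets above. The path 'orchestrator/progress.json' and every
// field name are illustrative assumptions, not part of this commit.
interface SubtaskRecord {
  owner: 'search_agent' | 'research_agent' | 'content_writer_agent';
  objective: string;
  outputFile: string;   // where the sub-agent persisted its results
  status: 'planned' | 'in_progress' | 'done' | 'blocked';
  updatedAt: string;    // ISO 8601 timestamp
  blockers?: string[];
}

// Re-read with 'read_file' before every new delegation ('list_files' first),
// and append new records rather than overwriting existing ones.
const progress: SubtaskRecord[] = [
  {
    owner: 'search_agent',
    objective: 'Collect 30-50 candidate companies with verified domains',
    outputFile: 'research/candidates-sources.json',
    status: 'in_progress',
    updatedAt: new Date().toISOString(),
  },
];
```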
