Add AI Court Simulation Notebook #690
base: main
Conversation
Walkthrough

Two new Jupyter notebooks have been introduced. The first implements an AI chatbot assistant for answering questions about Chilean government services, integrating translation and web search functionality. The second provides a lightweight cybersecurity agent that automates the validation of Proof of Concept (PoC) exploits for CVE vulnerabilities using minimal agent classes and a guided workflow.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant Chatbot as "Tomás"
    participant FirecrawlTool
    participant Translator
    User->>Chatbot: Enter query
    Chatbot->>Translator: Translate query to Spanish
    Translator-->>Chatbot: Spanish query
    Chatbot->>FirecrawlTool: Search government services
    FirecrawlTool-->>Chatbot: Search results (Spanish)
    Chatbot->>Translator: Translate results to English
    Translator-->>Chatbot: English results
    Chatbot-->>User: Present results
```
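The translate → search → translate round trip in the first diagram can be sketched with a few stub functions. Both the translator and the search call below are hypothetical stand-ins (the notebook itself uses deep-translator and the Firecrawl API):

```python
# Minimal sketch of the chatbot's translate -> search -> translate loop.
# translate() and search_services() are stubs, not the notebook's real code.

def translate(text: str, target: str) -> str:
    # Stub: a real implementation would call a translation service.
    glossary = {("hello", "es"): "hola", ("hola", "en"): "hello"}
    return glossary.get((text, target), text)

def search_services(query_es: str) -> str:
    # Stub: a real implementation would query ChileAtiende via Firecrawl.
    return f"resultados para: {query_es}"

def answer(user_query_en: str) -> str:
    query_es = translate(user_query_en, target="es")   # English -> Spanish
    results_es = search_services(query_es)             # search in Spanish
    return translate(results_es, target="en")          # Spanish -> English

print(answer("hello"))
```

The point is only the shape of the pipeline: every user-facing string crosses the translator twice, so translation failures need handling on both legs.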
```mermaid
sequenceDiagram
    participant User
    participant PockyAgent
    participant AttackIntentAgent
    participant ValidationAgent
    User->>PockyAgent: Provide CVE ID
    PockyAgent->>AttackIntentAgent: Extract attack intent from CVE description
    AttackIntentAgent-->>PockyAgent: Attack intent
    PockyAgent->>ValidationAgent: Validate PoC against attack intent
    ValidationAgent-->>PockyAgent: Validation result (JSON)
    PockyAgent-->>User: Display result
```
Summary of Changes

Hello @Dhivya-Bharathy, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces two new AI-powered Jupyter notebooks into the examples/cookbooks directory. One notebook provides an AI assistant for government services, while the other demonstrates an AI agent for cybersecurity Proof-of-Concept (PoC) search and validation. Note that the pull request title and description refer to a "Legalia Ai Mini Court Notebook," which is not present in the actual changes in the patch.

Highlights

- New Cookbook: Chile Government Services Assistant: A new Jupyter notebook (Chile_Government_Services_Assistant.ipynb) showcases an AI chatbot designed to answer questions about Chilean government services, using the Firecrawl API for web content retrieval and deep-translator for language translation.
- New Cookbook: Pocky Cybersecurity PoC Agent: Another new notebook (Pocky_Cybersecurity_PoC_Agent.ipynb) demonstrates an AI agent that automates the search and validation of CVE Proof-of-Concept (PoC) exploits, featuring dummy implementations for attack intent extraction and PoC validation.
Code Review

This pull request introduces two new Jupyter notebooks: Chile_Government_Services_Assistant.ipynb and Pocky_Cybersecurity_PoC_Agent.ipynb. The first demonstrates an AI chatbot for Chilean government services, while the second showcases a cybersecurity PoC search and validation agent. Both notebooks are well-structured for demo purposes, clearly indicating placeholder logic where real implementations would go. My review identified a few areas in the Chile Government Services Assistant notebook where code readability and error-handling robustness could be improved, primarily by reducing redundancy and adopting a more explicit exception-based approach to error management.
" if search_result and hasattr(search_result, 'data') and search_result.data:\n", | ||
" filtered_results = [\n", | ||
" result for result in search_result.data\n", | ||
" if str(result.get(\"url\", \"\")).startswith(\"https://www.chileatiende.gob.cl/fichas\") and not str(result.get(\"url\", \"\")).endswith(\"pdf\")\n", | ||
" ]\n", | ||
" if filtered_results:\n", | ||
" for num, result in enumerate(filtered_results, start=1):\n", | ||
" response_md += self.template.format(\n", | ||
" result_number=num,\n", | ||
" page_title=str(result.get(\"title\", \"\")),\n", | ||
" page_url=str(result.get(\"url\", \"\")),\n", | ||
" page_content=str(result.get(\"markdown\", \"\"))\n", | ||
" )\n", | ||
" return response_md\n", | ||
" else:\n", | ||
" return None\n", | ||
" else:\n", | ||
" return None\n", |
The return None statements are duplicated within this method. This logic can be simplified by assigning search_result.data to a variable and then checking filtered_results once, reducing redundancy and improving readability.
```python
search_result_data = search_result.data if search_result and hasattr(search_result, 'data') else None
if search_result_data:
    filtered_results = [
        result for result in search_result_data
        if str(result.get("url", "")).startswith("https://www.chileatiende.gob.cl/fichas") and not str(result.get("url", "")).endswith("pdf")
    ]
    if filtered_results:
        for num, result in enumerate(filtered_results, start=1):
            response_md += self.template.format(
                result_number=num,
                page_title=str(result.get("title", "")),
                page_url=str(result.get("url", "")),
                page_content=str(result.get("markdown", ""))
            )
        return response_md
return None
```
" except Exception as e:\n", | ||
" return f\"Error during search: {e}\"" |
Returning an error message as a string makes error handling in the calling code brittle, as it relies on string matching ("Error" not in spanish_answer). It's generally better practice to raise an exception for unexpected errors, allowing the caller to handle different error conditions explicitly. This improves robustness and clarity.

```python
        except Exception as e:
            raise RuntimeError(f"Firecrawl search failed: {e}")
```
" spanish_answer = firecrawl_tool.search(spanish_query)\n", | ||
"\n", | ||
" # Only translate if we got a real answer\n", | ||
" if spanish_answer and isinstance(spanish_answer, str) and spanish_answer.strip() and \"Error\" not in spanish_answer:\n", | ||
" try:\n", | ||
" english_answer = translate_to_english(spanish_answer)\n", | ||
" print(\"\\nTomás (in English):\\n\", english_answer)\n", | ||
" except Exception as e:\n", | ||
" print(f\"\\nTomás: I found information, but couldn't translate it. Here it is in Spanish:\\n{spanish_answer}\\n(Translation error: {e})\")\n", | ||
" else:\n", | ||
" print(\"\\nTomás: Sorry, I couldn't find relevant information. Try rephrasing your question or ask about another service.\")" | ||
] |
Following up on the previous comment, instead of checking for the string "Error" in the response, it's more robust to catch specific exceptions raised by the firecrawl_tool.search method. This allows for clearer separation of error conditions (e.g., API errors vs. no results found) and more precise error messages to the user.
```python
try:
    spanish_query = translate_to_spanish(user_input)
    spanish_answer = firecrawl_tool.search(spanish_query)
    if spanish_answer and isinstance(spanish_answer, str) and spanish_answer.strip():
        try:
            english_answer = translate_to_english(spanish_answer)
            print("\nTomás (in English):\n", english_answer)
        except Exception as e:
            print(f"\nTomás: I found information, but couldn't translate it. Here it is in Spanish:\n{spanish_answer}\n(Translation error: {e})")
    else:
        print("\nTomás: Sorry, I couldn't find relevant information. Try rephrasing your question or ask about another service.")
except RuntimeError as e:
    print(f"\nTomás: An error occurred during the search: {e}. Please try again later.")
except Exception as e:
    print(f"\nTomás: An unexpected error occurred: {e}. Please try again later.")
```
Actionable comments posted: 3
🧹 Nitpick comments (4)

examples/cookbooks/Pocky_Cybersecurity_PoC_Agent.ipynb (1)

Lines 78-80: Security: Avoid hardcoded API key placeholders. The notebook contains placeholder API keys that users might accidentally commit. Consider using more obvious placeholder text or adding warning comments.

```diff
-os.environ["EXA_API_KEY"] = "your api key"
-os.environ["OPENAI_API_KEY"] = "your api key"
+os.environ["EXA_API_KEY"] = "YOUR_EXA_API_KEY_HERE"  # Replace with your actual API key
+os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY_HERE"  # Replace with your actual API key
```

examples/cookbooks/Chile_Government_Services_Assistant.ipynb (3)

Lines 69-70: Security: API key exposure risk. Similar to the other notebook, the API key placeholders could be accidentally committed with real values.

```diff
-os.environ['FIRECRAWL_API_KEY'] = "your api key here"
-os.environ['OPENAI_API_KEY'] = "your api key here"
+os.environ['FIRECRAWL_API_KEY'] = "YOUR_FIRECRAWL_API_KEY_HERE"  # Replace with your actual API key
+os.environ['OPENAI_API_KEY'] = "YOUR_OPENAI_API_KEY_HERE"  # Replace with your actual API key
```

Lines 138-139: Input validation could be more robust. The search query validation only checks length but doesn't validate content quality or sanitize input.

```diff
 def search(self, search: str) -> str:
-    if not search or len(search) < 5:
-        return "Error: Please provide a valid search query (at least 5 characters)."
+    if not search or len(search.strip()) < 5:
+        return "Error: Please provide a valid search query (at least 5 characters)."
+
+    # Basic sanitization
+    search = search.strip()
+    if not search.replace(' ', '').isalnum():
+        return "Error: Search query contains invalid characters."
```

Lines 150-153: URL filtering logic is too restrictive. The filtering only allows URLs starting with a specific government domain and excludes PDFs. This might miss relevant information from other authoritative sources. Consider expanding the filter to include other Chilean government domains:

```diff
 filtered_results = [
     result for result in search_result.data
-    if str(result.get("url", "")).startswith("https://www.chileatiende.gob.cl/fichas") and not str(result.get("url", "")).endswith("pdf")
+    if (str(result.get("url", "")).startswith("https://www.chileatiende.gob.cl/") or
+        str(result.get("url", "")).startswith("https://www.gob.cl/")) and
+    not str(result.get("url", "")).endswith("pdf")
 ]
```
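One caveat on the `isalnum()` sanitization suggested in the nitpick above: Python's `str.isalnum()` rejects punctuation common in natural-language Spanish queries ("¿", "?", commas), so legitimate questions would be refused. A sketch of a more permissive check — the allowed character set here is an assumption for illustration, not code from the notebook:

```python
import re

# Hypothetical validator: allow letters (including accented), digits,
# whitespace, and punctuation typical of natural-language questions.
_QUERY_RE = re.compile(r"^[\w\s¿?¡!.,'-]+$")

def is_valid_query(search: str) -> bool:
    search = search.strip()
    return len(search) >= 5 and bool(_QUERY_RE.match(search))

print(is_valid_query("¿Cómo renuevo mi cédula de identidad?"))  # True
print(is_valid_query("aaa"))                                    # False: too short
print(is_valid_query("<script>alert(1)</script>"))              # False: rejected
```

A strict whitelist still blocks markup-injection attempts while accepting ordinary accented text, which plain `isalnum()` cannot do.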
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)

- examples/cookbooks/Chile_Government_Services_Assistant.ipynb (1 hunks)
- examples/cookbooks/Pocky_Cybersecurity_PoC_Agent.ipynb (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (6)
- GitHub Check: quick-test
- GitHub Check: test-core (3.11)
- GitHub Check: test-examples
- GitHub Check: Run tests and collect coverage
- GitHub Check: performance-test
- GitHub Check: GitGuardian Security Checks
🔇 Additional comments (6)

examples/cookbooks/Pocky_Cybersecurity_PoC_Agent.ipynb (2)

Lines 1-232: Major inconsistency with PR objectives. This notebook implements a cybersecurity PoC validation tool, but the PR objectives clearly state this should be a "Legalia Ai Mini Court Notebook" that simulates a courtroom environment with legal roles. The delivered content doesn't match the intended purpose.

> Likely an incorrect or invalid review comment.

Line 55: Check available versions for each package individually:

```shell
#!/bin/bash
# Check available versions for each package individually
packages=(praisonaiagents exa-py python-dotenv requests beautifulsoup4)
for pkg in "${packages[@]}"; do
  echo -e "\n=== $pkg ==="
  pip index versions "$pkg" || echo "⚠️ Unable to fetch versions for $pkg"
done
```

examples/cookbooks/Chile_Government_Services_Assistant.ipynb (4)

Lines 1-300: Major inconsistency with PR objectives. This notebook implements a Chilean government services assistant, which doesn't align with the PR objectives of creating a "Legalia Ai Mini Court Notebook" for courtroom simulation.

> Likely an incorrect or invalid review comment.

Lines 94-109: Good error handling in translation functions. The translation functions properly handle exceptions and provide fallback behavior, which is essential for a user-facing application.

Lines 131-132: Good input validation in the FirecrawlTool constructor. The API key validation prevents runtime errors and provides clear error messages.

Lines 264-283: Well-structured main chat loop. The chat loop properly handles user input, exit conditions, and error scenarios. The bilingual approach with fallback to Spanish is user-friendly.
"class ValidationAgent:\n", | ||
" def __init__(self, input_json):\n", | ||
" self.input_json = input_json\n", | ||
" def run(self):\n", | ||
" # Dummy validation logic for notebook demo\n", | ||
" data = json.loads(self.input_json)\n", | ||
" return True if \"attack_intent\" in data and \"poc_sample\" in data else False\n", | ||
"\n", | ||
"class AttackIntentAgent:\n", | ||
" def __init__(self, description):\n", | ||
" self.description = description\n", | ||
" def run(self):\n", | ||
" # Dummy intent extraction for notebook demo\n", | ||
" return f\"Intent for: {self.description[:50]}...\"" |
🛠️ Refactor suggestion

Dummy implementations lack meaningful functionality.

The agent classes are minimal stubs that don't demonstrate real functionality. The ValidationAgent.run() method only checks for the existence of JSON keys, and AttackIntentAgent.run() returns a truncated string.

Consider implementing more realistic demo logic that shows the intended workflow:
```diff
 class ValidationAgent:
     def __init__(self, input_json):
         self.input_json = input_json
     def run(self):
-        # Dummy validation logic for notebook demo
-        data = json.loads(self.input_json)
-        return True if "attack_intent" in data and "poc_sample" in data else False
+        # Enhanced demo validation logic
+        try:
+            data = json.loads(self.input_json)
+            if "attack_intent" in data and "poc_sample" in data:
+                # Simulate basic validation checks
+                poc = data["poc_sample"].lower()
+                intent = data["attack_intent"].lower()
+                # Basic keyword matching for demo
+                return any(keyword in poc for keyword in ["exploit", "payload", "vulnerability"])
+            return False
+        except json.JSONDecodeError:
+            return False
```
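To make the behavior of the suggested enhancement concrete, here is a small self-contained check (the agent class is reproduced from the suggestion above; the sample CVE payload is invented for illustration):

```python
import json

# Reproduction of the enhanced ValidationAgent from the suggestion,
# so this snippet runs on its own.
class ValidationAgent:
    def __init__(self, input_json):
        self.input_json = input_json

    def run(self):
        try:
            data = json.loads(self.input_json)
            if "attack_intent" in data and "poc_sample" in data:
                poc = data["poc_sample"].lower()
                # Keyword matching stands in for real PoC validation.
                return any(k in poc for k in ["exploit", "payload", "vulnerability"])
            return False
        except json.JSONDecodeError:
            return False

ok = ValidationAgent(json.dumps({
    "attack_intent": "RCE via crafted request",
    "poc_sample": "python exploit.py --target host",
})).run()
print(ok)  # True: the PoC text contains the keyword "exploit"

print(ValidationAgent("not json").run())  # False: invalid JSON is rejected
```

Unlike the original stub, malformed JSON now returns False instead of raising, and a payload with both keys but no exploit-like content is also rejected.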
🤖 Prompt for AI Agents
In examples/cookbooks/Pocky_Cybersecurity_PoC_Agent.ipynb around lines 105 to
118, the ValidationAgent and AttackIntentAgent classes have placeholder methods
that only perform trivial checks or return simple strings. To fix this, enhance
ValidationAgent.run() to perform actual validation logic on the input JSON, such
as verifying data formats or required fields beyond mere key presence.
Similarly, update AttackIntentAgent.run() to implement a basic intent extraction
or analysis based on the description text, demonstrating a meaningful processing
step rather than just returning a truncated string. This will better illustrate
the intended workflow and functionality.
"def run_pocky_for_cve(cve_id):\n", | ||
" # Example: Simulate fetching a description and PoC (replace with real logic)\n", | ||
" description = f\"Description for {cve_id} (replace with real Exa/OpenAI search)\"\n", | ||
" poc_sample = f\"PoC code for {cve_id} (replace with real PoC search)\"\n", | ||
"\n", | ||
" # Stage 2: Attack Intent\n", | ||
" intent = AttackIntentAgent(description).run()\n", | ||
" print(f\"Attack Intent: {intent}\")\n", | ||
"\n", | ||
" # Stage 3: Validation\n", | ||
" validation_input = json.dumps({\"attack_intent\": intent, \"poc_sample\": poc_sample}, indent=2)\n", | ||
" valid = ValidationAgent(validation_input).run()\n", | ||
" print(f\"Validation Result: {valid}\")\n", | ||
" if valid:\n", | ||
" print(f\"PoC for {cve_id} is valid and ready to use.\")\n", | ||
" else:\n", | ||
" print(f\"PoC for {cve_id} failed validation.\")" |
🛠️ Refactor suggestion

Main function uses only placeholder data.

The run_pocky_for_cve function doesn't integrate with the imported APIs (OpenAI, Exa) and only uses hardcoded placeholder strings, making it ineffective for demonstration purposes.

Implement basic integration with the imported APIs to make the demo functional:
```diff
 def run_pocky_for_cve(cve_id):
-    # Example: Simulate fetching a description and PoC (replace with real logic)
-    description = f"Description for {cve_id} (replace with real Exa/OpenAI search)"
-    poc_sample = f"PoC code for {cve_id} (replace with real PoC search)"
+    # Basic demo integration with Exa search
+    try:
+        exa = Exa(api_key=os.environ.get("EXA_API_KEY"))
+        search_results = exa.search(f"CVE {cve_id} proof of concept", num_results=1)
+        description = search_results.results[0].text if search_results.results else f"No description found for {cve_id}"
+        poc_sample = f"PoC sample for {cve_id} (extracted from search results)"
+    except Exception as e:
+        description = f"Error fetching description for {cve_id}: {e}"
+        poc_sample = f"Error fetching PoC for {cve_id}"
```
🤖 Prompt for AI Agents
In examples/cookbooks/Pocky_Cybersecurity_PoC_Agent.ipynb around lines 174 to
190, the run_pocky_for_cve function uses only placeholder strings for
description and PoC instead of calling the imported OpenAI and Exa APIs. To fix
this, replace the hardcoded description and poc_sample with actual calls to the
OpenAI and Exa APIs to fetch real data based on the cve_id, ensuring the
function demonstrates meaningful integration and functionality.
" if spanish_answer and isinstance(spanish_answer, str) and spanish_answer.strip() and \"Error\" not in spanish_answer:\n", | ||
" try:\n", |
🛠️ Refactor suggestion

Potential issue with error detection.

The condition "Error" not in spanish_answer for error detection is fragile and could produce false positives if legitimate content contains the word "Error".

```diff
-        if spanish_answer and isinstance(spanish_answer, str) and spanish_answer.strip() and "Error" not in spanish_answer:
+        if spanish_answer and isinstance(spanish_answer, str) and spanish_answer.strip() and not spanish_answer.startswith("Error"):
```
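A two-line demonstration of why the substring check misfires while the prefix check does not (the sample strings are invented for illustration):

```python
# A legitimate Spanish result mentioning "Error" would be discarded by
# substring matching, but kept by prefix matching.
results = [
    "Error during search: connection timed out",          # real failure
    "Cómo corregir un Error en su declaración de renta",  # legitimate content
]

def looks_like_failure_substring(s: str) -> bool:
    return "Error" in s

def looks_like_failure_prefix(s: str) -> bool:
    return s.startswith("Error")

print([looks_like_failure_substring(r) for r in results])  # [True, True]  <- false positive
print([looks_like_failure_prefix(r) for r in results])     # [True, False] <- correct
```

Even the prefix check remains a convention rather than a contract, which is why the earlier suggestion to raise exceptions is the more robust fix.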
🤖 Prompt for AI Agents
In examples/cookbooks/Chile_Government_Services_Assistant.ipynb around lines 275
to 276, the current error detection condition checks if the string "Error" is
not in spanish_answer, which is fragile and may cause false positives. Replace
this check with a more robust error detection mechanism, such as using a
specific error flag, error code, or a more precise pattern matching to reliably
identify actual errors without misclassifying valid content.
Codecov Report

All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

```
@@           Coverage Diff           @@
##             main     #690   +/-   ##
=======================================
  Coverage   14.50%   14.50%
=======================================
  Files          25       25
  Lines        2517     2517
  Branches      357      357
=======================================
  Hits          365      365
  Misses       2136     2136
  Partials       16       16
```

Flags with carried forward coverage won't be shown. ☔ View full report in Codecov by Sentry.
This notebook simulates a simplified courtroom using AI agents powered by GPT-4o-mini. It features a judge, prosecutor, defense attorney, and witness who interact over a three-day trial. Ideal for demos, it showcases real-time legal roleplay with minimal setup.