diff --git a/.github/workflows/run-generate-llms-txt.yml b/.github/workflows/run-generate-llms-txt.yml new file mode 100644 index 0000000..016ad35 --- /dev/null +++ b/.github/workflows/run-generate-llms-txt.yml @@ -0,0 +1,25 @@ +name: Run generate-llms-txt.py + +on: + pull_request: + branches: + - main + push: + branches: + - main + +jobs: + run-script: + runs-on: ubuntu-latest + + steps: + - name: Checkout repository + uses: actions/checkout@v3 + + - name: Set up Python + uses: actions/setup-python@v4 + with: + python-version: '3.x' + + - name: Run generate-llms-txt.py + run: python generate-llms-txt.py diff --git a/content/docs/ai/connect-mcp-clients-to-gibsonai.md b/content/docs/ai/connect-mcp-clients-to-gibsonai.md index 241198c..7bfb482 100644 --- a/content/docs/ai/connect-mcp-clients-to-gibsonai.md +++ b/content/docs/ai/connect-mcp-clients-to-gibsonai.md @@ -19,19 +19,21 @@ This guide covers the setup for the following MCP Clients: ## Prerequisites - A [GibsonAI account](https://app.gibsonai.com/signup). -- [UV](https://docs.astral.sh/uv/) installed. ## Authentication -You'll need to ensure you're logged in to the Gibson CLI before the MCP server will work. +### Remote MCP Server Authentication -```bash -uvx --from gibson-cli@latest gibson auth login -``` +Remote MCP servers handle authentication automatically through the hosted server. The first time the client initializes GibsonAI's MCP server, it will trigger an OAuth flow: + +1. Your browser will open a GibsonAI page asking you to authorize the "GibsonAI MCP Server" to access your account. +2. Review the requested permissions and click **Authorize**. +3. You should see a success message, and you can close the browser tab. +4. Your MCP client should now be connected to the GibsonAI Remote MCP Server and ready to use. ## Cursor Setup -1. Go to `Cursor` → `Settings` → `Cursor Settings` → `MCP Tools`. +1. Go to `Cursor` → `Settings` → `Cursor Settings` → `MCP & Integrations` → `MCP Tools`. 2. Click `New MCP Server`. 3. 
Update the configuration to include the following: @@ -39,8 +41,7 @@ uvx --from gibson-cli@latest gibson auth login { "mcpServers": { "gibson": { - "command": "uvx", - "args": ["--from", "gibson-cli@latest", "gibson", "mcp", "run"] + "url": "https://mcp.gibsonai.com" } } } @@ -55,11 +56,9 @@ uvx --from gibson-cli@latest gibson auth login ```json { - "mcpServers": { - "gibson": { - "command": "uvx", - "args": ["--from", "gibson-cli@latest", "gibson", "mcp", "run"] - } + "gibson": { + "command": "npx", + "args": ["mcp-remote", "https://mcp.gibsonai.com"] } } ``` @@ -77,8 +76,8 @@ uvx --from gibson-cli@latest gibson auth login { "mcpServers": { "gibson": { - "command": "uvx", - "args": ["--from", "gibson-cli@latest", "gibson", "mcp", "run"] + "command": "npx", + "args": ["mcp-remote", "https://mcp.gibsonai.com"] } } } @@ -87,7 +86,7 @@ uvx --from gibson-cli@latest gibson auth login ## Claude Code Setup ```sh -claude mcp add gibson -- uvx --from gibson-cli@latest gibson mcp run +claude mcp add gibson -- npx mcp-remote https://mcp.gibsonai.com ``` ```sh @@ -98,8 +97,8 @@ claude mcp get gibson gibson: Scope: Local (private to you in this project) Type: stdio - Command: uvx - Args: --from gibson-cli@latest gibson mcp run + Command: npx + Args: mcp-remote https://mcp.gibsonai.com Environment: To remove this server, run: claude mcp remove "gibson" -s local @@ -115,9 +114,8 @@ To remove this server, run: claude mcp remove "gibson" -s local "inputs": [], "servers": { "gibson": { - "type": "stdio", - "command": "uvx", - "args": ["--from", "gibson-cli@latest", "gibson", "mcp", "run"] + "command": "npx", + "args": ["mcp-remote", "https://mcp.gibsonai.com"] } } } @@ -132,14 +130,14 @@ See the official [GitHub Copilot MCP docs](https://docs.github.com/en/copilot/cu 2. To configure MCP Servers in Cline, you need to modify the `cline_mcp_settings.json` file. Click the **MCP Servers** icon → go to **Installed** → click **Configure MCP Servers** to open the configuration file. -3. Add the following `gibson` server entry inside the `mcpServers` object: +3. Update the configuration to include the following: ```json { "mcpServers": { "gibson": { - "command": "uvx", - "args": ["--from", "gibson-cli@latest", "gibson", "mcp", "run"] + "command": "npx", + "args": ["mcp-remote", "https://mcp.gibsonai.com"] } } } diff --git a/content/docs/ai/mcp-server.md b/content/docs/ai/mcp-server.md index 4d11dc7..f4a0752 100644 --- a/content/docs/ai/mcp-server.md +++ b/content/docs/ai/mcp-server.md @@ -9,7 +9,9 @@ GibsonAI Model Context Protocol (MCP) server allows tools like Cursor, Windsurf, ## Manage GibsonAI with natural language -The Model Context Protocol (MCP) is a standardized way for AI tools to interact with GibsonAI projects and databases using natural language, providing secure and contextual access to your data and infrastructure. +The Model Context Protocol (MCP) is a standardized way for AI tools to interact with GibsonAI projects and databases using natural language, providing secure and contextual access to your data and infrastructure. + +GibsonAI offers a **remote** MCP server on the hosted service with seamless OAuth authentication and no setup required. 
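For clients that can talk to remote MCP servers directly, connecting is a single configuration entry pointing at the hosted URL; stdio-only clients can reach the same server through the `mcp-remote` proxy shown in the per-client guides that follow. A minimal sketch (the exact wrapper key, here `mcpServers`, varies by client):

```json
{
  "mcpServers": {
    "gibson": {
      "url": "https://mcp.gibsonai.com"
    }
  }
}
```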
 GibsonAI MCP Server

diff --git a/public/llms.txt b/public/llms.txt
index f8259da..9126e7e 100644
--- a/public/llms.txt
+++ b/public/llms.txt
@@ -1404,7 +1404,11 @@ GibsonAI Model Context Protocol (MCP) server allows tools like Cursor, Windsurf,

 ## Manage GibsonAI with natural language

-The Model Context Protocol (MCP) is a standardized way for AI tools to interact with GibsonAI projects and databases using natural language, providing secure and contextual access to your data and infrastructure.
+The Model Context Protocol (MCP) is a standardized way for AI tools to interact with GibsonAI projects and databases using natural language, providing secure and contextual access to your data and infrastructure.
+
+GibsonAI offers both **local** and **remote** MCP server options to suit different security and deployment preferences:
+- **Local MCP Server**: Runs on your machine for maximum privacy and control
+- **Remote MCP Server**: Hosted service with seamless OAuth authentication and no setup required

 GibsonAI MCP Server

@@ -1457,21 +1461,54 @@ This guide covers the setup for the following MCP Clients:

 ## Prerequisites

 - A [GibsonAI account](https://app.gibsonai.com/signup).
-- [UV](https://docs.astral.sh/uv/) installed.
+- For local MCP servers: [UV](https://docs.astral.sh/uv/) installed.

 ## Authentication

-You'll need to ensure you're logged in to the Gibson CLI before the MCP server will work.
+### Local MCP Server Authentication
+
+You'll need to ensure you're logged in to the Gibson CLI before the local MCP server will work.

 ```bash
 uvx --from gibson-cli@latest gibson auth login
 ```

+### Remote MCP Server Authentication
+
+Remote MCP servers handle authentication automatically through the hosted server. The first time the client initializes GibsonAI's MCP server, it will trigger an OAuth flow:
+
+1. Your browser will open a GibsonAI page asking you to authorize the "GibsonAI MCP Server" to access your account.
+2. Review the requested permissions and click **Authorize**.
+3. You should see a success message, and you can close the browser tab.
+4. Your MCP client should now be connected to the GibsonAI Remote MCP Server and ready to use.
+
 ## Cursor Setup

-1. Go to `Cursor` → `Settings` → `Cursor Settings` → `MCP Tools`.
+1. Go to `Cursor` → `Settings` → `Cursor Settings` → `MCP & Integrations` → `MCP Tools`.
 2. Click `New MCP Server`.
-3. Update the configuration to include the following:
+3. Choose your preferred configuration:
+
+
+
+
+Update the configuration to include the following:
+
+```json
+{
+  "mcpServers": {
+    "gibson": {
+      "url": "https://mcp.gibsonai.com"
+    }
+  }
+}
+```
+
+
+
+
+Update the configuration to include the following:

 ```json
 {
@@ -1484,12 +1521,37 @@ uvx --from gibson-cli@latest gibson auth login
 }
 ```

+
+
+
+
 ## Windsurf Setup

 1. Go to `Windsurf` → `Settings` → `Windsurf Settings` → `Cascade`.
 2. Click `Add server` in the `Model Context Protocol (MCP) Servers` section.
 3. In the modal, click `Add custom server`.
-4. Update the configuration to include the following:
+4. Choose your preferred configuration:
+
+
+
+
+Update the configuration to include the following:
+
+```json
+{
+  "gibson": {
+    "command": "npx",
+    "args": ["mcp-remote", "https://mcp.gibsonai.com"]
+  }
+}
+```
+
+
+
+
+Update the configuration to include the following:

 ```json
 {
@@ -1502,6 +1564,10 @@ uvx --from gibson-cli@latest gibson auth login
 }
 ```

+
+
+
+
 5. Open the `Cascade` chat and, if necessary, refresh the MCP servers.
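Both variants above ultimately reach the same hosted server; the `mcp-remote` form simply bridges stdio-only clients to it. If Cascade does not pick the server up, running that same proxy command in a terminal is a quick, optional way to confirm the endpoint is reachable and to trigger the OAuth flow described under Authentication:

```bash
npx mcp-remote https://mcp.gibsonai.com
```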
## Claude Desktop Setup @@ -1509,7 +1575,30 @@ uvx --from gibson-cli@latest gibson auth login 1. Go to `Claude` → `Settings` → `Developer`. 2. Click `Edit Config`. 3. Open the `claude_desktop_config.json` file. -4. Update the configuration to include the following: +4. Choose your preferred configuration: + + + + + +Update the configuration to include the following: + +```json +{ + "mcpServers": { + "gibson": { + "command": "npx", + "args": ["mcp-remote", "https://mcp.gibsonai.com"] + } + } +} +``` + + + + + +Update the configuration to include the following: ```json { @@ -1522,8 +1611,39 @@ uvx --from gibson-cli@latest gibson auth login } ``` + + + + ## Claude Code Setup + + + + +```sh +claude mcp add gibson -- npx mcp-remote https://mcp.gibsonai.com +``` + +```sh +claude mcp get gibson +``` + +```txt +gibson: + Scope: Local (private to you in this project) + Type: stdio + Command: npx + Args: mcp-remote https://mcp.gibsonai.com + Environment: + +To remove this server, run: claude mcp remove "gibson" -s local +``` + + + + + ```sh claude mcp add gibson -- uvx --from gibson-cli@latest gibson mcp run ``` @@ -1543,10 +1663,38 @@ gibson: To remove this server, run: claude mcp remove "gibson" -s local ``` + + + + ## VS Code + GitHub Copilot Setup 1. Create or open the `.vscode/mcp.json` file. -2. Update the configuration to include the following: +2. Choose your preferred configuration: + + + + + +Update the configuration to include the following: + +```json +{ + "inputs": [], + "servers": { + "gibson": { + "command": "npx", + "args": ["mcp-remote", "https://mcp.gibsonai.com"] + } + } +} +``` + + + + + +Update the configuration to include the following: ```json { @@ -1561,6 +1709,10 @@ To remove this server, run: claude mcp remove "gibson" -s local } ``` + + + + See the official [GitHub Copilot MCP docs](https://docs.github.com/en/copilot/customizing-copilot/extending-copilot-chat-with-mcp#configuring-mcp-servers-in-visual-studio-code) for more information. ## Cline (VS Code Extension) Setup @@ -1570,7 +1722,26 @@ See the official [GitHub Copilot MCP docs](https://docs.github.com/en/copilot/cu 2. To configure MCP Servers in Cline, you need to modify the `cline_mcp_settings.json` file. Click the **MCP Servers** icon → go to **Installed** → click **Configure MCP Servers** to open the configuration file. -3. Add the following `gibson` server entry inside the `mcpServers` object: +3. Choose your preferred configuration and add the `gibson` server entry inside the `mcpServers` object: + + + + + +```json +{ + "mcpServers": { + "gibson": { + "command": "npx", + "args": ["mcp-remote", "https://mcp.gibsonai.com"] + } + } +} +``` + + + + ```json { @@ -1583,6 +1754,10 @@ See the official [GitHub Copilot MCP docs](https://docs.github.com/en/copilot/cu } ``` + + + + See the [Claude Desktop MCP docs](https://modelcontextprotocol.io/quickstart/user) for more information. @@ -2570,2002 +2745,1732 @@ To get started, **open the Data API modal** in your GibsonAI project dashboard a --- -title: Talk to your data -subtitle: Enable natural language interactions with your existing databases for business users -enableTableOfContents: true -updatedOn: '2025-01-08T00:00:00.000Z' +title: AI-Powered App Builders +subtitle: Build faster full-stack apps with prompts and production-grade database. +updatedOn: '2025-07-10T22:31:52.611Z' --- -Enable business users to query existing databases using natural language, eliminating the need for SQL knowledge or technical expertise. 
Get instant insights from your data through conversational interfaces. + -## How it works + -GibsonAI connects to your existing databases and provides a natural language interface that translates plain English questions into optimized SQL queries. Business users can ask questions in the GibsonAI chat and get immediate answers without involving technical teams. + -## Key Features + -### Natural Language Processing + -- **Conversational Interface**: Ask questions in plain English -- **Context Awareness**: Understands follow-up questions and context -- **Multi-language Support**: Query in multiple languages -- **Auto-correction**: Handles typos and variations in phrasing + -### Smart Query Translation + -- **SQL Generation**: Automatically generates optimized SQL queries -- **Join Intelligence**: Understands table relationships and performs joins -- **Aggregation Logic**: Handles complex calculations and groupings -- **Performance Optimization**: Generates efficient queries for fast results +--- +title: AI Agent Frameworks +subtitle: Build smarter AI agents using popular frameworks that integrate with GibsonAI. +updatedOn: '2025-06-29T22:31:52.611Z' +--- -### Business-Friendly Results + -- **Export Options**: Download results in CSV or SQL formats -- **Historical Queries**: Access previous queries and results + -## Example Queries + -### Sales Analysis + -- "What were our total sales last quarter?" -- "Show me the top 10 customers by revenue" -- "Which products have the highest profit margins?" -- "How many new customers did we acquire this month?" + -### Marketing Insights +--- +title: How to create a SQL Agent with LangChain, LangGraph and GibsonAI +subtitle: Step-by-step guide on how to create a SQL Agent with LangChain, LangGraph and GibsonAI +enableTableOfContents: true +updatedOn: '2025-01-29T22:31:52.611Z' +--- -- "What's the conversion rate for our email campaigns?" -- "Which marketing channels generate the most leads?" -- "Show me website traffic trends over the last 6 months" -- "What's our customer acquisition cost by source?" +This guide will show you how to build a SQL Agent that can **create, modify, and manage databases** using **[GibsonAI MCP Server](https://docs.gibsonai.com/ai/mcp-server)** and **[LangChain](https://langchain.com/)** with **[LangGraph](https://langchain-ai.github.io/langgraph/)**. -### Operations Monitoring +## What You'll Build -- "How many support tickets are still open?" -- "What's our average response time to customer inquiries?" -- "Show me inventory levels for all products" -- "Which suppliers have the best delivery performance?" +- A **SQL Agent** powered by LangChain/LangGraph that can: + - **Create new databases and tables** from natural language prompts. + - **Modify existing schemas** (add, remove, or update columns and tables). + - **Deploy schema changes** to serverless databases (e.g., MySQL). + - **Inspect and query database schemas** with conversational commands. + - **Execute SQL queries** and get formatted results. -## Benefits for Business Teams +## Key Concepts -### Democratized Data Access +- **GibsonAI MCP Server:** Turns natural language prompts into fully functional database schemas. +- **From Prompt to Database:** You can go from describing a database in plain English to having a running schema with deployed APIs in minutes. +- **LangGraph ReAct Agent:** Uses reasoning and action cycles to interact with GibsonAI MCP tools effectively. 
-- **No SQL Required**: Business users can query data without technical knowledge -- **Self-Service Analytics**: Reduce dependency on technical teams -- **Immediate Insights**: Get answers to questions instantly -- **Reduced Bottlenecks**: Eliminate wait times for data requests +> The **GibsonAI MCP integration with LangChain** uses the official MCP adapters to seamlessly connect LangChain agents with GibsonAI's database management capabilities. -### Improved Decision Making +## Prerequisites -- **Real-time Information**: Access up-to-date data for better decisions -- **Comprehensive Analysis**: Explore data from multiple angles -- **Trend Identification**: Spot patterns and trends quickly -- **Data-Driven Culture**: Encourage data-driven decision making +Before starting, ensure you have: -### Enhanced Productivity +1. **A GibsonAI account** – Sign up at [https://app.gibsonai.com](https://app.gibsonai.com/). +2. **Python 3.9+** installed. +3. **OpenAI API key** (you can get one from [OpenAI](https://platform.openai.com/)). -- **Time Savings**: Skip complex data retrieval processes -- **Focus on Analysis**: Spend more time analyzing results, less time getting data -- **Iterative Exploration**: Ask follow-up questions to dive deeper + -## Getting Started +## Install UV Package Manager -1. **Connect Your Database**: Connect GibsonAI to your existing database using a connection string -2. **Start Exploring**: Begin asking questions about your data in the GibsonAI chat interface +[UV](https://docs.astral.sh/uv/) is needed to run GibsonAI CLI. -## Next Steps +Run: -Ready to enable natural language querying for your business teams? [Get started with GibsonAI](/get-started/signing-up) and transform how your organization interacts with data. +```bash +curl -LsSf https://astral.sh/uv/install.sh | sh +``` +## Install GibsonAI CLI ---- -title: Expose your data to your teams -subtitle: Create secure, controlled access to your data across teams with proper governance -enableTableOfContents: true -updatedOn: '2025-01-08T00:00:00.000Z' ---- - -Coming Soon +The GibsonAI CLI lets you log in and manage projects: +```bash +uvx --from gibson-cli@latest gibson auth login +``` ---- -title: Excel to fully functional DB - Talk to data -subtitle: Transform your Excel spreadsheets into production-ready databases with natural language querying -enableTableOfContents: true -updatedOn: '2025-01-08T00:00:00.000Z' ---- +Log in with your GibsonAI account. -Coming Soon +## Install Python Dependencies +Install LangChain, LangGraph, MCP adapters, and OpenAI libraries: ---- -title: Schema versioning and deployment -subtitle: Manage database schema evolution with GibsonAI's automatic versioning across development and production environments -enableTableOfContents: true -updatedOn: '2025-01-08T00:00:00.000Z' ---- +```bash +pip install mcp langchain-mcp-adapters langgraph langchain-openai +``` -Manage database schema evolution with GibsonAI's automatic versioning across development and production environments. GibsonAI handles all versioning complexity on the backend, enabling safe schema changes and zero-downtime deployments with automatic migration management. +## Set Your OpenAI API Key -## How it works +Export your API key: -GibsonAI automatically handles complex migrations and versioning on the backend with no extra work from the developer. The system manages separate development and production environments, automatically tracking schema changes and enabling safe deployments with built-in rollback capabilities. 
+```bash +export OPENAI_API_KEY="your_openai_api_key" +``` - +*(Replace `your_openai_api_key` with your real key.)* -Automatic versioning +## Create a Python File -Safe deployments +Create a new Python file (e.g., `agent.py`) and copy this code: -Environment management +```python +import asyncio +import os +from mcp import ClientSession, StdioServerParameters +from mcp.client.stdio import stdio_client +from langchain_mcp_adapters.tools import load_mcp_tools +from langgraph.prebuilt import create_react_agent +from langchain_openai import ChatOpenAI -Model generation +class GibsonAIAgent: + """LangChain + LangGraph agent for GibsonAI database management""" - + def __init__(self): + # Initialize OpenAI model + self.model = ChatOpenAI( + model="gpt-4o", + temperature=0.1, + api_key=os.getenv("OPENAI_API_KEY") + ) -## Key Features + # GibsonAI MCP server parameters + self.server_params = StdioServerParameters( + command="uvx", + args=["--from", "gibson-cli@latest", "gibson", "mcp", "run"] + ) -### Automatic Backend Versioning + async def run_agent(self, message: str) -> None: + """Run the GibsonAI agent with the given message.""" + try: + async with stdio_client(self.server_params) as (read, write): + async with ClientSession(read, write) as session: + # Initialize MCP session + await session.initialize() -- **Seamless Version Control**: All schema changes are automatically versioned -- **Migration Management**: Complex migrations handled entirely by GibsonAI -- **Rollback Capabilities**: Safe rollback to previous schema versions -- **Change Tracking**: Complete audit trail of all schema modifications + # Load all GibsonAI MCP tools + tools = await load_mcp_tools(session) -### Multi-Environment Support + # Create ReAct agent with tools + agent = create_react_agent( + self.model, + tools, + state_modifier="""You are a GibsonAI database assistant. + Help users manage their database projects and schemas. -- **Development Environment**: Safe testing ground for schema changes -- **Production Environment**: Zero-downtime deployments to production -- **Environment Synchronization**: Automatic promotion from development to production -- **Isolation**: Complete separation between environments + Your capabilities include: + - Run SQL queries and get results + - Creating new GibsonAI projects + - Managing database schemas (tables, columns, relationships) + - Deploying schema changes to hosted databases + - Querying database schemas and data + - Providing insights about database structure and best practices + + Always be helpful and explain what you're doing step by step. + When creating schemas, use appropriate data types and constraints. + Consider relationships between tables and suggest indexes where appropriate. 
+ Be conversational and provide clear explanations of your actions.""", + ) -### Data Exploration and Analysis + # Execute the agent + result = await agent.ainvoke( + {"messages": [{"role": "user", "content": message}]} + ) -- **Text-to-SQL**: Ask questions about your data in natural language -- **Gibson Studio**: Intuitive data management UI for running queries -- **Automatic SQL Generation**: GibsonAI generates SQL to answer your questions -- **Real-time Insights**: Immediate data analysis without writing SQL + # Print the response + if "messages" in result: + for msg in result["messages"]: + if hasattr(msg, "content") and msg.content: + print(f"\n🤖 {msg.content}\n") + elif hasattr(msg, "tool_calls") and msg.tool_calls: + for tool_call in msg.tool_calls: + print(f"🛠️ Calling tool: {tool_call['name']}") + if tool_call.get("args"): + print(f" Args: {tool_call['args']}") -## Step-by-step guide + except Exception as e: + print(f"Error running agent: {str(e)}") -### 1. Test changes in development +async def run_gibsonai_agent(message: str) -> None: + """Convenience function to run the GibsonAI agent""" + agent = GibsonAIAgent() + await agent.run_agent(message) -```bash -# Working in development environment -# Make schema changes using natural language -gibson modify products "Add a category_id foreign key and remove the old category_name column" +# Example usage +if __name__ == "__main__": + asyncio.run( + run_gibsonai_agent( + "Create a database for a blog posts platform with users and posts tables." + ) + ) ``` -GibsonAI automatically: - -- Generates the migration script -- Updates Pydantic schemas and SQLAlchemy models -- Deploys to development environment -- Validates the changes - -### 2. Explore your data with text-to-SQL - -Use Gibson Studio to validate your changes: - -- "Show me all products with their new category relationships" -- "Find any products that might have lost category data" -- "What's the distribution of products across categories?" +## Run the Agent -### 3. Deploy to production +Run the script: ```bash -# Working in production environment -# Deploy the validated changes -gibson deploy +python agent.py ``` -GibsonAI handles: - -- Zero-downtime migration -- Automatic rollback if issues occur -- API endpoint updates -- Model regeneration - -### 4. Access your updated data +The agent will: -Integration options: +- Start the local **GibsonAI MCP Server**. +- Use **LangGraph's ReAct agent** to reason about your request. +- Take your prompt (e.g., "Create a database for a blog with users and posts tables"). +- Automatically create a database schema using GibsonAI tools. +- Show you step-by-step what actions it's taking. -- **OpenAPI Spec**: Updated automatically in your project settings -- **Direct Connection**: Connection string available in the UI -- **RESTful APIs**: Base URL `https://api.gibsonai.com` - - SQL queries: `/v1/-/query` - - Table operations: `/v1/-/[table-name-in-kebab-case]` -- **API Documentation**: Always up-to-date in the data API section +## View Your Database -## Use cases +Go to your **GibsonAI Dashboard**: - +[https://app.gibsonai.com](https://app.gibsonai.com/) -Schema updates and migrations +Here, you can: -AI-driven schema generation +- See your database schema. +- Check generated REST APIs for your data. +- Monitor database performance and usage. -Unified API layer + - +## Example Prompts to Try -## What's next? 
+You can experiment with these prompts: - +- **"Show me the current schema for my project."** +- **"Add a 'products' table with name, price, and description fields."** +- **"Create a 'users' table with authentication fields."** +- **"Deploy my schema changes to production."** +- **"Run a query to show all users from the database."** +- **"Create a new database for an e-commerce platform."** +- **"Add a foreign key relationship between users and posts tables."** +## Advanced Features ---- -title: Schema Updates and Migrations -subtitle: Manage database schema changes with GibsonAI's automatic zero-downtime migration capabilities -enableTableOfContents: true -updatedOn: '2025-01-08T00:00:00.000Z' ---- +### Custom Agent Instructions -Manage database schema changes with GibsonAI's automatic zero-downtime migration capabilities. GibsonAI handles all the complexity of migrations and versioning on the backend, so you can evolve your data model in production without interruptions or manual migration scripts. +You can customize the agent's behavior by modifying the `state_modifier` parameter: -## How it works +```python +agent = create_react_agent( + self.model, + tools, + state_modifier="""You are a specialized e-commerce database expert. + Focus on creating optimized schemas for online stores with proper + indexing and relationships for high-performance queries.""", +) +``` -GibsonAI automatically handles complex migrations and versioning on the backend with no extra work from the developer. Simply describe your schema changes in natural language, and GibsonAI manages all the migration complexity behind the scenes, ensuring zero downtime and safe deployments. +### Error Handling and Logging -## Key Features +Add robust error handling for production use: -### Automatic Migration Management +```python +import logging -- **Zero-Downtime Migrations**: All schema changes are deployed without service interruptions -- **Backend Complexity Handling**: GibsonAI manages migration scripts, rollbacks, and versioning automatically -- **Natural Language Changes**: Describe your schema modifications in plain English -- **Safe Deployments**: Automatic validation and safety checks for all schema changes +logging.basicConfig(level=logging.INFO) +logger = logging.getLogger(__name__) -### Text-to-SQL Analysis +try: + result = await agent.ainvoke({"messages": [{"role": "user", "content": message}]}) + logger.info("Agent execution completed successfully") +except Exception as e: + logger.error(f"Agent execution failed: {str(e)}") + # Handle specific error cases +``` -- **Data Exploration**: Ask questions about your data in natural language -- **Automatic SQL Generation**: GibsonAI generates the SQL queries to answer your questions -- **Gibson Studio Integration**: Run generated queries in the intuitive data management UI -- **Real-time Insights**: Get immediate answers about your data without writing SQL +### Multiple Project Management -## Step-by-step guide +Create agents that can work with multiple GibsonAI projects: -### 1. 
Make schema changes with natural language - -```bash -gibson modify users "Add a profile_picture_url column and an is_verified boolean field" +```python +async def run_multi_project_agent(message: str, project_id: str = None) -> None: + """Run agent with specific project context""" + if project_id: + message = f"Working with project {project_id}: {message}" + + agent = GibsonAIAgent() + await agent.run_agent(message) ``` -GibsonAI automatically: - -- Generates the migration script -- Validates the changes -- Deploys with zero downtime -- Updates your Pydantic schemas and SQLAlchemy models - -### 2. Explore your data with text-to-SQL - -Use Gibson Studio to ask questions about your data: - -- "Show me all users who joined in the last 30 days" -- "What's the average order value by customer segment?" -- "Find duplicate email addresses in the users table" - -GibsonAI generates and runs the SQL automatically. - -### 3. Access your data via APIs - -Integration options: - -- **OpenAPI Spec**: Available in your project settings -- **Direct Connection**: Connection string available in the UI -- **RESTful APIs**: Base URL `https://api.gibsonai.com` - - SQL queries: `/v1/-/query` - - Table operations: `/v1/-/[table-name-in-kebab-case]` -- **API Documentation**: Available in the data API section of the UI +## Why LangChain + GibsonAI? -## Use cases +- **Tool Integration:** LangChain's MCP adapters seamlessly connect to GibsonAI's database tools. +- **Reasoning:** LangGraph's ReAct pattern provides intelligent planning and execution. +- **Flexibility:** Easy to extend with additional LangChain tools and chains. +- **Observability:** Built-in logging and debugging capabilities. +- **Production Ready:** Robust error handling and async support. - +### Key Advantages Over Traditional Approaches -Schema versioning and deployment +- **No Complex Prompting:** Skip writing lengthy system prompts to teach your agent SQL operations. GibsonAI's MCP tools handle database interactions automatically, so your agent knows exactly how to create tables, run queries, and manage schemas without custom instruction engineering. -AI-driven schema generation +- **No Custom Tool Development:** Forget building your own database connection tools or SQL execution wrappers. GibsonAI provides pre-built MCP tools that work out-of-the-box with any LangChain agent. -Unified API layer +- **Unified Database Support:** No need to manage separate MCP servers for different databases. GibsonAI handles MySQL today and PostgreSQL support is coming in the next two weeks - all through the same simple interface. - +- **Avoid LangChain SQL Toolkit Issues:** LangChain's built-in SQL database toolkit has known limitations with complex queries, connection management, and error handling. GibsonAI's MCP tools provide a more reliable alternative with better error messages and query optimization. -## What's next? +- **Sandboxed Database Environment:** Your agent can safely run SQL queries in isolated database environments without affecting production data. Each project gets its own secure sandbox, perfect for development and testing. - +## Next Steps +- Explore the [GibsonAI MCP Server documentation](https://docs.gibsonai.com/ai/mcp-server) for advanced features. +- Learn about [LangGraph patterns](https://langchain-ai.github.io/langgraph/) for complex workflows. +- Check out [LangChain's tool ecosystem](https://python.langchain.com/docs/integrations/tools/) for additional capabilities. 
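For day-to-day experimentation, you can also wrap the `run_gibsonai_agent` helper in a small interactive loop instead of hard-coding a single prompt. A minimal sketch, assuming the `agent.py` module created earlier in this guide (the prompt loop and exit keywords are arbitrary choices):

```python
import asyncio

from agent import run_gibsonai_agent  # the script created earlier in this guide

async def main() -> None:
    # Forward each prompt to the GibsonAI agent until the user quits
    while True:
        prompt = input("gibson> ").strip()
        if prompt.lower() in {"exit", "quit"}:
            break
        if prompt:
            await run_gibsonai_agent(prompt)

if __name__ == "__main__":
    asyncio.run(main())
```

Each call opens a fresh MCP session, which keeps the sketch simple and is fine for experimentation; for long sessions you could instead hold the `stdio_client` connection open across prompts.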
--- -title: Optimize database with AI assistant -subtitle: Leverage GibsonAI's text-to-SQL and Gibson Studio for database optimization and data analysis +title: How to create a SQL Agent with Agno and GibsonAI +subtitle: Step-by-step guide on how to create a SQL Agent with Agno and GibsonAI enableTableOfContents: true -updatedOn: '2025-01-08T00:00:00.000Z' +updatedOn: '2025-07-28T22:31:52.611Z' --- -Leverage GibsonAI's text-to-SQL capabilities and Gibson Studio for database optimization and data analysis. Ask questions about your data in natural language and get instant SQL queries and insights to optimize your database performance. - -## How it works - -GibsonAI provides text-to-SQL functionality that allows you to ask questions about your data in natural language. The system automatically generates SQL queries to answer your questions, which you can run directly in Gibson Studio, the intuitive data management UI. +This guide will show you how to build a SQL Agent that can **create, modify, and manage databases** using **[GibsonAI MCP Server](https://docs.gibsonai.com/ai/mcp-server)** and **[Agno](https://www.agno.com?utm_source=gibsonai&utm_medium=partner-docs&utm_campaign=partner-technical&utm_content=sql-agent-gibsonai-guide)**. - +## What You’ll Build -Text-to-SQL +- A **SQL Agent** powered by Agno that can: + - **Create new databases and tables** from natural language prompts. + - **Modify existing schemas** (add, remove, or update columns and tables). + - **Deploy schema changes** to serverless databases (e.g., MySQL). + - **Inspect and query database schemas** with conversational commands. -Gibson Studio +## Key Concepts -Instant insights +- **GibsonAI MCP Server:** Turns natural language prompts into fully functional database schemas and exposes **REST APIs** for data access and CRUD operations. +- **From Prompt to Database:** You can go from describing a database in plain English to having a running schema with deployed APIs in minutes. +- **Serverless Data APIs:** Once your schema is created, GibsonAI provides instant endpoints (e.g., `/query` for SQL operations or `/{tablename}` for CRUD). -API integration +> The **GibsonAI MCP integration in Agno** is available in the Agno repo: [GibsonAI MCP Toolkit – agno/cookbook/tools/mcp/gibsonai.py](https://github.com/agno-agi/agno/blob/main/cookbook/tools/mcp/gibsonai.py) +> - +## Prerequisites -## Key Features +Before starting, ensure you have: -### Text-to-SQL Analysis +1. **A GibsonAI account** – Sign up at [https://app.gibsonai.com](https://app.gibsonai.com/). +2. **Python 3.9+** installed. +3. **OpenAI API key** (you can get one from [OpenAI](https://platform.openai.com/)). -- **Natural Language Queries**: Ask questions about your data in plain English -- **Automatic SQL Generation**: GibsonAI generates optimized SQL queries -- **Performance Insights**: Get recommendations for query optimization -- **Data Exploration**: Discover patterns and insights in your data + -### Gibson Studio Integration +## Install UV Package Manager -- **Visual Query Interface**: Run generated SQL queries in an intuitive UI -- **Real-time Results**: See query results instantly -- **Data Visualization**: Visual representations of your query results -- **Query History**: Track and reuse previous queries +[UV](https://docs.astral.sh/uv/) is needed to run GibsonAI CLI. 
-### Database Optimization +Run: -- **Performance Monitoring**: Track query performance automatically -- **Index Recommendations**: Get suggestions for database indexes -- **Schema Optimization**: Recommendations for schema improvements -- **MySQL Optimization**: Specialized optimizations for MySQL databases +```bash +curl -LsSf https://astral.sh/uv/install.sh | sh +``` -## Step-by-step guide +## Install GibsonAI CLI -### 1. Analyze database performance +The GibsonAI CLI lets you log in and manage projects: -Use text-to-SQL to understand your database performance: +```bash +uvx --from gibson-cli@latest gibson auth login +``` -**Performance Questions:** +Log in with your GibsonAI account. -- "Which queries are taking the longest to execute?" -- "Show me tables with the most frequent access patterns" -- "What's the average response time for API calls to each table?" -- "Find any slow-running queries in the last week" +## Install Python Dependencies -### 2. Optimize data structure +Install Agno, MCP, and OpenAI libraries: -Ask questions about your data structure: +```bash +pip install agno mcp openai +``` -**Structure Analysis:** +## Set Your OpenAI API Key -- "Which tables have the most rows?" -- "Show me tables with duplicate data" -- "Find columns that are rarely used" -- "What's the storage size of each table?" +Export your API key: -### 3. Monitor data quality +```bash -Use Gibson Studio to monitor data quality: +export OPENAI_API_KEY="your_openai_api_key" +``` -**Quality Checks:** +*(Replace `your_openai_api_key` with your real key.)* -- "Find any null values in required fields" -- "Show me duplicate records across tables" -- "Check for data consistency issues" -- "Identify orphaned records" +## Create a Python File -### 4. Access optimized data +Create a new Python file (e.g., `sql_agent.py`) and copy this code: -Integration options for your applications: +```python +import asyncio +from textwrap import dedent -- **OpenAPI Spec**: Available in your project settings -- **Direct Connection**: Connection string available in the UI -- **RESTful APIs**: Base URL `https://api.gibsonai.com` - - SQL queries: `/v1/-/query` - - Table operations: `/v1/-/[table-name-in-kebab-case]` -- **API Documentation**: Available in the data API section of the UI +from agno.agent import Agent +from agno.models.openai import OpenAIChat +from agno.tools.mcp import MCPTools -## Example optimization queries +async def run_gibsonai_agent(message: str) -> None: + """Run the GibsonAI SQL Agent with the given message.""" + async with MCPTools( + "uvx --from gibson-cli@latest gibson mcp run", + timeout_seconds=300, # Longer timeout for database operations + ) as mcp_tools: + agent = Agent( + name="GibsonAIAgent", + model=OpenAIChat(id="gpt-4o"), + tools=[mcp_tools], + description="SQL Agent for managing database projects and schemas", + instructions=dedent("""\ + You are a GibsonAI database assistant. + Help users manage databases and schemas by creating tables, + updating columns, and deploying schema changes. 
+ """), + markdown=True, + show_tool_calls=True, + ) -### Performance Analysis + await agent.aprint_response(message, stream=True) -```sql --- Generated from: "Show me the slowest performing endpoints" -SELECT - endpoint, - AVG(response_time) as avg_response_time, - COUNT(*) as request_count -FROM api_logs -WHERE created_at >= DATE_SUB(NOW(), INTERVAL 7 DAY) -GROUP BY endpoint -ORDER BY avg_response_time DESC -LIMIT 10; +# Example usage +if __name__ == "__main__": + asyncio.run( + run_gibsonai_agent( + "Create a database for a blog with users and posts tables." + ) + ) ``` -### Data Quality Check - -```sql --- Generated from: "Find duplicate email addresses" -SELECT - email, - COUNT(*) as duplicate_count -FROM users -GROUP BY email -HAVING COUNT(*) > 1; -``` +## Run the Agent -### Storage Analysis +Run the script: -```sql --- Generated from: "Which tables use the most storage?" -SELECT - table_name, - ROUND(((data_length + index_length) / 1024 / 1024), 2) as table_size_mb -FROM information_schema.tables -WHERE table_schema = DATABASE() -ORDER BY table_size_mb DESC; +```bash +python sql_agent.py ``` -## Use cases - - +The agent will: -Schema updates and migrations +- Start the **GibsonAI MCP Server**. +- Take your prompt (e.g., "Create a database for a blog with users and posts tables"). +- Automatically create a database schema. -Unified API layer +## View Your Database -RAG schema generation +Go to your **GibsonAI Dashboard**: - +[https://app.gibsonai.com](https://app.gibsonai.com/) -## What's next? - - +Here, you can: +- See your database schema. +- Check generated REST APIs for your data. ---- -title: Development environments for AI agent databases -subtitle: Create and manage development environments for AI agent database testing and iteration -enableTableOfContents: true -updatedOn: '2025-01-08T00:00:00.000Z' ---- + -Create and manage development environments for AI agent database testing and iteration. Use GibsonAI's natural language database management to set up isolated environments for agent development and testing. +## Example Prompts to Try - +You can experiment with: -MCP Integration +- **"Show me the current schema for my project."** +- **"Add a 'products' table with name, price, and description."** +- **"Deploy schema changes to production."** +- **"Create a new database for a task management app."** -Database Management + -CLI Tools +--- +title: How to Create an AI Agent for SQL Queries with CrewAI and GibsonAI +subtitle: Step-by-step guide on how to create a AI Agent for SQL Queries with CrewAI and GibsonAI +enableTableOfContents: true +updatedOn: '2025-07-28T22:31:52.611Z' +--- - +This guide explains how to build an AI Agent using **CrewAI** for orchestrating SQL queries and **GibsonAI** for handling data storage and CRUD operations via its **Data API**. -## Key Features +## What You’ll Build -### Environment Separation +- A **CrewAI Agent** that uses the **GibsonAI Data API** to read and write data. +- You will define tables in GibsonAI, and CrewAI will use its API to **query or insert records**. +- The example provided demonstrates **storing sales contact information** in GibsonAI. -- **Development Database**: Separate database for development and testing -- **Production Database**: Live database for production agents -- **Schema Isolation**: Independent schema evolution in each environment -- **Safe Testing**: Test changes without affecting production data +## Key Concept -### Natural Language Management +- **GibsonAI exposes a REST Data API** for all created tables. 
- **CrewAI can query and perform CRUD operations** directly via this API, making it a powerful backend for AI agents.
- Agents can also execute **SQL queries via GibsonAI’s `/query` endpoint**.

**GitHub Repo Link:** [Sales Contact Finder (CrewAI + GibsonAI)](https://github.com/GibsonAI/awesome-gibson/tree/main/sales_contact_finder)

## Prerequisites

Before you begin, ensure you have:

1. **A GibsonAI Account** – Sign up at [https://app.gibsonai.com](https://app.gibsonai.com/).
2. **A GibsonAI API Key** – Create a project in GibsonAI and copy the API key from the **Connect tab**.
3. **Python 3.9+** installed.
4. **OpenAI API Key** – [Get one here](https://platform.openai.com/).
5. **Serper.dev API Key** (if using web scraping/search features).

## Generate Your Database Schema in GibsonAI

Use the following prompt in GibsonAI to create the schema:

```bash
I want to create a sales contact aggregator agent.
Generate a “sales_contact” table with fields (company_id, name, title, linkedin_url, phone, email).
Also create a “sales_company” table with fields (name).
All string fields, except name, are nullable.
```

Click **Deploy** and copy the **API Key**.

---

## Clone the Sales Contact Finder Example

This example lives in the **awesome-gibson** repo. Clone it:

```bash
git clone https://github.com/GibsonAI/awesome-gibson.git
cd awesome-gibson/sales_contact_finder
```

## Configure Your Environment

Copy and edit the `.env` file:

```bash
cp .env.example .env
```

Fill in:

```bash
GIBSONAI_API_KEY=your_project_api_key
SERPER_API_KEY=your_serper_api_key
OPENAI_API_KEY=your_openai_api_key
```

---

## Create and Activate Virtual Environment

```bash
uv venv  # create the .venv first if it doesn't exist yet
source .venv/bin/activate  # For Windows: .venv\Scripts\activate
```

---

## Install Dependencies

```bash
uv pip sync pyproject.toml
```

---

## Implement CrewAI Tool for SQL Operations

CrewAI will communicate with GibsonAI’s **Data API** for CRUD operations. Below is an example **ContactStorageTool**:

```python
import json
import os
import requests
from dotenv import load_dotenv
from pydantic import Field
from crewai.tools import BaseTool

load_dotenv()  # Load environment variables from .env

class ContactStorageTool(BaseTool):
    name: str = "ContactStorageTool"
    description: str = """
    Saves contact information in a GibsonAI database using the hosted API.
    Expected payload format:
    {"company_name": "Company Name", "contacts": [{"name": "Name", "title": "Title",
    "linkedin_url": "LinkedIn URL", "phone": "Phone", "email": "Email"}]}
    """

    api_base_url: str = Field(description="The base URL of the GibsonAI API")
    api_key: str = Field(description="The API key associated with your GibsonAI project")

    def __init__(self):
        # BaseTool is a Pydantic model, so field values must be passed through
        # super().__init__() rather than assigned before it runs.
        api_key = os.getenv("GIBSONAI_API_KEY")
        if not api_key:
            raise ValueError("Missing GIBSONAI_API_KEY environment variable")
        super().__init__(
            api_base_url="https://api.gibsonai.com/v1/-",
            api_key=api_key,
        )

    def _run(self, contact_info: str) -> str:
        try:
            contact_data = json.loads(contact_info) if isinstance(contact_info, str) else contact_info
            company_name = contact_data["company_name"]
            contacts = contact_data["contacts"]

            # Insert the company first and capture its generated id
            company_payload = {"name": company_name}
            response = requests.post(
                f"{self.api_base_url}/sales-company",
                json=company_payload,
                headers={"X-Gibson-API-Key": self.api_key},
            )
            response.raise_for_status()
            company_id = response.json()["id"]
            print(f"Posted company: {response.status_code}")

            # Insert each contact, linked to the company via company_id
            for contact in contacts:
                contact_payload = {
                    "company_id": company_id,
                    "name": contact["name"],
                    "title": contact["title"],
                    "linkedin_url": contact["linkedin_url"],
                    "phone": contact["phone"],
                    "email": contact["email"],
                }
                response = requests.post(
                    f"{self.api_base_url}/sales-contact",
                    json=contact_payload,
                    headers={"X-Gibson-API-Key": self.api_key},
                )
                response.raise_for_status()
                print(f"Posted contact {contact['name']}: {response.status_code}")

            return f"Stored {len(contacts)} contact(s) for {company_name}"
        except Exception as e:
            return f"Failed to post contact: {str(e)}"
```

---

## Run Your Crew

Run:

```bash
python main.py run
```

The crew will:

- Gather data (e.g., sales contacts).
- Use the **GibsonAI Data API** to store the results.

## Check Your Data

Go to the [GibsonAI Dashboard](https://app.gibsonai.com/) to see:

- **Sales Company** and **Sales Contact** tables.
- The data stored by the CrewAI agent.
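You can also read the stored rows back over the same Data API the tool writes to. A minimal sketch (it assumes the `GIBSONAI_API_KEY` from your `.env`; the `/query` payload shape follows the SQL-query examples elsewhere in these docs):

```python
import os

import requests
from dotenv import load_dotenv

load_dotenv()  # pick up GIBSONAI_API_KEY from .env, as the tool above does

API_BASE = "https://api.gibsonai.com/v1/-"
HEADERS = {"X-Gibson-API-Key": os.getenv("GIBSONAI_API_KEY")}

# List the companies the CrewAI agent inserted
companies = requests.get(f"{API_BASE}/sales-company", headers=HEADERS)
companies.raise_for_status()
print(companies.json())

# Run ad-hoc SQL through the /query endpoint
contacts = requests.post(
    f"{API_BASE}/query",
    json={"query": "SELECT name, title, email FROM sales_contact"},
    headers=HEADERS,
)
contacts.raise_for_status()
print(contacts.json())
```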
-- Safe schema modifications -- Testing complex migrations -- Validating data integrity -- Rolling back problematic changes + -### Team Collaboration + -Support: +--- +title: Talk to your data +subtitle: Enable natural language interactions with your existing databases for business users +enableTableOfContents: true +updatedOn: '2025-01-08T00:00:00.000Z' +--- -- Shared development environments -- Consistent schema management -- Collaborative testing -- Knowledge sharing +Enable business users to query existing databases using natural language, eliminating the need for SQL knowledge or technical expertise. Get instant insights from your data through conversational interfaces. -## Environment Management +## How it works -### Development Environment +GibsonAI connects to your existing databases and provides a natural language interface that translates plain English questions into optimized SQL queries. Business users can ask questions in the GibsonAI chat and get immediate answers without involving technical teams. -```python -# Development environment setup -def setup_dev_environment(): - # Create development database schema - # gibson modify users "Create users table for development testing" - # gibson modify agent_logs "Create agent_logs table for tracking agent behavior" - # gibson code models - # gibson merge +## Key Features - # Populate with test data - create_test_data() +### Natural Language Processing - print("Development environment ready") +- **Conversational Interface**: Ask questions in plain English +- **Context Awareness**: Understands follow-up questions and context +- **Multi-language Support**: Query in multiple languages +- **Auto-correction**: Handles typos and variations in phrasing -def create_test_data(): - """Create test data for development""" - test_users = [ - {"username": "dev_user_1", "email": "dev1@example.com"}, - {"username": "dev_user_2", "email": "dev2@example.com"} - ] +### Smart Query Translation - # Create test users - for user in test_users: - requests.post( - "https://api.gibsonai.com/v1/-/users", - json=user, - headers={"Authorization": "Bearer dev_api_key"} - ) -``` +- **SQL Generation**: Automatically generates optimized SQL queries +- **Join Intelligence**: Understands table relationships and performs joins +- **Aggregation Logic**: Handles complex calculations and groupings +- **Performance Optimization**: Generates efficient queries for fast results -### Production Environment +### Business-Friendly Results -```python -# Production environment management -def prepare_production_deployment(): - """Prepare for production deployment""" +- **Export Options**: Download results in CSV or SQL formats +- **Historical Queries**: Access previous queries and results - # Validate development changes - if validate_development_changes(): - # Apply to production - # gibson merge - print("Production deployment successful") - else: - print("Development validation failed") +## Example Queries -def validate_development_changes(): - """Validate changes before production deployment""" +### Sales Analysis - # Check schema integrity - # Verify data migrations - # Test performance - # Validate security +- "What were our total sales last quarter?" +- "Show me the top 10 customers by revenue" +- "Which products have the highest profit margins?" +- "How many new customers did we acquire this month?" - return True # Return validation result -``` +### Marketing Insights -## Benefits for AI Agent Development +- "What's the conversion rate for our email campaigns?" 
+- "Which marketing channels generate the most leads?" +- "Show me website traffic trends over the last 6 months" +- "What's our customer acquisition cost by source?" -### Safe Development +### Operations Monitoring -- **Isolated Testing**: Test changes without affecting production -- **Schema Validation**: Validate schema changes before deployment -- **Data Integrity**: Ensure data consistency across environments -- **Risk Mitigation**: Reduce risk of production issues +- "How many support tickets are still open?" +- "What's our average response time to customer inquiries?" +- "Show me inventory levels for all products" +- "Which suppliers have the best delivery performance?" -### Faster Iteration +## Benefits for Business Teams -- **Rapid Development**: Quick schema changes in development -- **Immediate Testing**: Test changes immediately -- **Natural Language**: Use natural language for schema modifications -- **Automated Models**: Automatic generation of Python models +### Democratized Data Access -### Team Collaboration +- **No SQL Required**: Business users can query data without technical knowledge +- **Self-Service Analytics**: Reduce dependency on technical teams +- **Immediate Insights**: Get answers to questions instantly +- **Reduced Bottlenecks**: Eliminate wait times for data requests -- **Shared Environments**: Team members can share development environments -- **Consistent Schemas**: Ensure consistency across team -- **Knowledge Transfer**: Easy knowledge sharing through natural language -- **Collaborative Testing**: Team-based testing and validation +### Improved Decision Making -## Best Practices +- **Real-time Information**: Access up-to-date data for better decisions +- **Comprehensive Analysis**: Explore data from multiple angles +- **Trend Identification**: Spot patterns and trends quickly +- **Data-Driven Culture**: Encourage data-driven decision making -### Environment Separation +### Enhanced Productivity -- **Clear Boundaries**: Keep development and production separate -- **Consistent Naming**: Use consistent naming conventions -- **Access Control**: Proper access controls for each environment -- **Documentation**: Document environment-specific configurations +- **Time Savings**: Skip complex data retrieval processes +- **Focus on Analysis**: Spend more time analyzing results, less time getting data +- **Iterative Exploration**: Ask follow-up questions to dive deeper -### Testing Strategy +## Getting Started -- **Comprehensive Tests**: Test all agent database operations -- **Schema Validation**: Validate schema changes thoroughly -- **Performance Testing**: Test performance in development -- **Security Testing**: Ensure security measures are effective +1. **Connect Your Database**: Connect GibsonAI to your existing database using a connection string +2. **Start Exploring**: Begin asking questions about your data in the GibsonAI chat interface -### Deployment Process +## Next Steps -- **Validation First**: Always validate in development -- **Gradual Rollout**: Deploy changes gradually -- **Monitor Performance**: Monitor production after deployment -- **Rollback Plan**: Have rollback procedures ready +Ready to enable natural language querying for your business teams? [Get started with GibsonAI](/get-started/signing-up) and transform how your organization interacts with data. -## Getting Started -1. **Set up Development Environment**: Create separate development database -2. **Configure API Keys**: Set up separate API keys for each environment -3. 
**Create Test Schema**: Define initial database schema for testing -4. **Develop Testing Workflow**: Create systematic testing procedures -5. **Validate and Deploy**: Test thoroughly before production deployment +--- +title: Expose your data to your teams +subtitle: Create secure, controlled access to your data across teams with proper governance +enableTableOfContents: true +updatedOn: '2025-01-08T00:00:00.000Z' +--- -## Gibson CLI Commands +Coming Soon -```bash -# Development environment commands -gibson modify table_name "description of changes" -gibson code models -gibson code schemas -# Apply changes to development -gibson merge - -# Reset development environment -gibson forget last -gibson build datastore -``` +--- +title: Excel to fully functional DB - Talk to data +subtitle: Transform your Excel spreadsheets into production-ready databases with natural language querying +enableTableOfContents: true +updatedOn: '2025-01-08T00:00:00.000Z' +--- -Ready to set up development environments for your AI agents? [Get started with GibsonAI](/get-started/signing-up). +Coming Soon --- -title: Database-driven workflows for AI agent decision tracking -subtitle: Use GibsonAI to track and manage AI agent decisions and workflows through database operations +title: Schema versioning and deployment +subtitle: Manage database schema evolution with GibsonAI's automatic versioning across development and production environments enableTableOfContents: true updatedOn: '2025-01-08T00:00:00.000Z' --- -Use GibsonAI to track and manage AI agent decisions and workflows through database operations. Create structured data storage for agent actions, decision logs, and workflow states using natural language database management. +Manage database schema evolution with GibsonAI's automatic versioning across development and production environments. GibsonAI handles all versioning complexity on the backend, enabling safe schema changes and zero-downtime deployments with automatic migration management. + +## How it works + +GibsonAI automatically handles complex migrations and versioning on the backend with no extra work from the developer. The system manages separate development and production environments, automatically tracking schema changes and enabling safe deployments with built-in rollback capabilities. 
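+The sketch below shows what this separation looks like from the client side: the same text-to-SQL endpoint is exposed in each environment, so you can confirm a change in development before promoting it. The per-environment API keys and the `products` table are placeholders, not fixed names.
+
+```python
+import requests
+
+# Placeholder keys -- each GibsonAI environment has its own API key
+API_KEYS = {"development": "dev_api_key", "production": "prod_api_key"}
+
+def check_schema(environment, question):
+    """Ask one environment about its current schema via text-to-SQL."""
+    response = requests.post(
+        "https://api.gibsonai.com/v1/-/query",
+        json={"query": question},
+        headers={"Authorization": f"Bearer {API_KEYS[environment]}"},
+    )
+    response.raise_for_status()
+    return response.json()
+
+# Verify the change in development before deploying it to production
+print(check_schema("development", "Show me the columns of the products table"))
+```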
-MCP Integration +Automatic versioning -Database Management +Safe deployments -CLI Tools +Environment management + +Model generation ## Key Features -### Decision Tracking Database - -- **Agent Actions**: Track all agent actions and decisions in structured format -- **Decision Logs**: Store detailed logs of agent decision-making processes -- **Workflow States**: Maintain current state of agent workflows -- **Audit Trail**: Complete audit trail of agent operations +### Automatic Backend Versioning -### Natural Language Database Management +- **Seamless Version Control**: All schema changes are automatically versioned +- **Migration Management**: Complex migrations handled entirely by GibsonAI +- **Rollback Capabilities**: Safe rollback to previous schema versions +- **Change Tracking**: Complete audit trail of all schema modifications -- **Schema Creation**: Create database schemas using natural language -- **Table Management**: Add and modify tables with simple prompts -- **Query Operations**: Query decision data using natural language -- **Relationship Building**: Define relationships between workflow entities +### Multi-Environment Support -### Workflow Data APIs +- **Development Environment**: Safe testing ground for schema changes +- **Production Environment**: Zero-downtime deployments to production +- **Environment Synchronization**: Automatic promotion from development to production +- **Isolation**: Complete separation between environments -- **REST Endpoints**: Auto-generated APIs for workflow data access -- **Query Interface**: Natural language query endpoint for complex analysis -- **Real-time Updates**: Update workflow states in real-time -- **Data Validation**: Built-in validation for workflow data +### Data Exploration and Analysis -## Implementation Examples +- **Text-to-SQL**: Ask questions about your data in natural language +- **Gibson Studio**: Intuitive data management UI for running queries +- **Automatic SQL Generation**: GibsonAI generates SQL to answer your questions +- **Real-time Insights**: Immediate data analysis without writing SQL -### Creating Workflow Database Schema +## Step-by-step guide -```python -# Using Gibson CLI to create workflow tracking schema -# Create decision tracking tables -# gibson modify agent_decisions "Create agent_decisions table with id, agent_id, decision_type, input_data, output_data, confidence_score, timestamp" -# gibson modify workflow_states "Create workflow_states table with id, workflow_id, current_state, previous_state, transition_reason, updated_at" -# gibson modify action_logs "Create action_logs table with id, agent_id, action_type, parameters, result, status, execution_time" -# gibson modify review_flags "Create review_flags table with id, decision_id, flag_reason, priority, status, created_at" +### 1. 
Test changes in development -# Generate models and apply changes -# gibson code models -# gibson merge +```bash +# Working in development environment +# Make schema changes using natural language +gibson modify products "Add a category_id foreign key and remove the old category_name column" ``` -### Tracking Agent Decisions +GibsonAI automatically: -```python -import requests -from datetime import datetime +- Generates the migration script +- Updates Pydantic schemas and SQLAlchemy models +- Deploys to development environment +- Validates the changes -class AgentDecisionTracker: - def __init__(self, api_key): - self.api_key = api_key - self.base_url = "https://api.gibsonai.com/v1/-" - self.headers = {"Authorization": f"Bearer {api_key}"} +### 2. Explore your data with text-to-SQL - def log_decision(self, agent_id, decision_type, input_data, output_data, confidence_score): - """Log agent decision to database""" - decision_data = { - "agent_id": agent_id, - "decision_type": decision_type, - "input_data": input_data, - "output_data": output_data, - "confidence_score": confidence_score, - "timestamp": datetime.now().isoformat() - } +Use Gibson Studio to validate your changes: - response = requests.post( - f"{self.base_url}/agent-decisions", - json=decision_data, - headers=self.headers - ) +- "Show me all products with their new category relationships" +- "Find any products that might have lost category data" +- "What's the distribution of products across categories?" - if response.status_code == 201: - print(f"Decision logged: {decision_type}") - return response.json() - else: - print(f"Failed to log decision: {response.status_code}") - return None +### 3. Deploy to production - def update_workflow_state(self, workflow_id, new_state, transition_reason): - """Update workflow state""" - # First, get current state - current_response = requests.get( - f"{self.base_url}/workflow-states/{workflow_id}", - headers=self.headers - ) +```bash +# Working in production environment +# Deploy the validated changes +gibson deploy +``` - if current_response.status_code == 200: - current_data = current_response.json() - previous_state = current_data.get("current_state") - else: - previous_state = None +GibsonAI handles: - # Update state - state_data = { - "workflow_id": workflow_id, - "current_state": new_state, - "previous_state": previous_state, - "transition_reason": transition_reason, - "updated_at": datetime.now().isoformat() - } +- Zero-downtime migration +- Automatic rollback if issues occur +- API endpoint updates +- Model regeneration - response = requests.put( - f"{self.base_url}/workflow-states/{workflow_id}", - json=state_data, - headers=self.headers - ) +### 4. 
Access your updated data - if response.status_code == 200: - print(f"Workflow state updated: {new_state}") - return response.json() - else: - print(f"Failed to update workflow state: {response.status_code}") - return None +Integration options: - def log_action(self, agent_id, action_type, parameters, result, status, execution_time): - """Log agent action""" - action_data = { - "agent_id": agent_id, - "action_type": action_type, - "parameters": parameters, - "result": result, - "status": status, - "execution_time": execution_time - } +- **OpenAPI Spec**: Updated automatically in your project settings +- **Direct Connection**: Connection string available in the UI +- **RESTful APIs**: Base URL `https://api.gibsonai.com` + - SQL queries: `/v1/-/query` + - Table operations: `/v1/-/[table-name-in-kebab-case]` +- **API Documentation**: Always up-to-date in the data API section - response = requests.post( - f"{self.base_url}/action-logs", - json=action_data, - headers=self.headers - ) +## Use cases - if response.status_code == 201: - print(f"Action logged: {action_type}") - return response.json() - else: - print(f"Failed to log action: {response.status_code}") - return None + - def flag_for_review(self, decision_id, flag_reason, priority="medium"): - """Flag decision for review""" - flag_data = { - "decision_id": decision_id, - "flag_reason": flag_reason, - "priority": priority, - "status": "pending", - "created_at": datetime.now().isoformat() - } +Schema updates and migrations - response = requests.post( - f"{self.base_url}/review-flags", - json=flag_data, - headers=self.headers - ) +AI-driven schema generation - if response.status_code == 201: - print(f"Decision flagged for review: {flag_reason}") - return response.json() - else: - print(f"Failed to flag decision: {response.status_code}") - return None -``` +Unified API layer -### Querying Decision Data + -```python -class DecisionAnalyzer: - def __init__(self, api_key): - self.api_key = api_key - self.base_url = "https://api.gibsonai.com/v1/-" - self.headers = {"Authorization": f"Bearer {api_key}"} +## What's next? - def analyze_low_confidence_decisions(self): - """Find decisions with low confidence scores""" - query_request = { - "query": "Show all decisions with confidence score below 0.7 from the last 24 hours" - } + - response = requests.post( - f"{self.base_url}/query", - json=query_request, - headers=self.headers - ) - if response.status_code == 200: - results = response.json() - print(f"Found {len(results)} low confidence decisions") - return results - else: - print(f"Query failed: {response.status_code}") - return None +--- +title: Schema Updates and Migrations +subtitle: Manage database schema changes with GibsonAI's automatic zero-downtime migration capabilities +enableTableOfContents: true +updatedOn: '2025-01-08T00:00:00.000Z' +--- - def get_agent_performance_metrics(self, agent_id): - """Get performance metrics for specific agent""" - query_request = { - "query": f"Calculate average confidence score, success rate, and decision count for agent {agent_id} over the last 7 days" - } +Manage database schema changes with GibsonAI's automatic zero-downtime migration capabilities. GibsonAI handles all the complexity of migrations and versioning on the backend, so you can evolve your data model in production without interruptions or manual migration scripts. 
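+As a quick illustration, the sketch below reads from an auto-generated table endpoint after a migration. Because GibsonAI regenerates the API alongside the schema, a newly added column simply appears in the returned records, with no client-side migration code. The `users` table, the `profile_picture_url` column, and the API key are placeholders taken from the step-by-step guide below.
+
+```python
+import requests
+
+BASE_URL = "https://api.gibsonai.com/v1/-"
+headers = {"Authorization": "Bearer your_api_key"}  # placeholder key
+
+# The same endpoint keeps working across schema versions; columns added
+# by a migration (e.g. profile_picture_url) show up in the records.
+response = requests.get(f"{BASE_URL}/users", headers=headers)
+response.raise_for_status()
+for user in response.json():
+    print(user.get("profile_picture_url"))
+```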
- response = requests.post( - f"{self.base_url}/query", - json=query_request, - headers=self.headers - ) +## How it works - if response.status_code == 200: - results = response.json() - print(f"Performance metrics for agent {agent_id}") - return results - else: - print(f"Query failed: {response.status_code}") - return None +GibsonAI automatically handles complex migrations and versioning on the backend with no extra work from the developer. Simply describe your schema changes in natural language, and GibsonAI manages all the migration complexity behind the scenes, ensuring zero downtime and safe deployments. - def get_workflow_bottlenecks(self): - """Identify workflow bottlenecks""" - query_request = { - "query": "Show workflow states that have been stuck in the same state for more than 4 hours" - } +## Key Features - response = requests.post( - f"{self.base_url}/query", - json=query_request, - headers=self.headers - ) +### Automatic Migration Management - if response.status_code == 200: - results = response.json() - print(f"Found {len(results)} workflow bottlenecks") - return results - else: - print(f"Query failed: {response.status_code}") - return None +- **Zero-Downtime Migrations**: All schema changes are deployed without service interruptions +- **Backend Complexity Handling**: GibsonAI manages migration scripts, rollbacks, and versioning automatically +- **Natural Language Changes**: Describe your schema modifications in plain English +- **Safe Deployments**: Automatic validation and safety checks for all schema changes - def get_pending_reviews(self): - """Get all pending review flags""" - query_request = { - "query": "Show all pending review flags ordered by priority and creation date" - } +### Text-to-SQL Analysis - response = requests.post( - f"{self.base_url}/query", - json=query_request, - headers=self.headers - ) +- **Data Exploration**: Ask questions about your data in natural language +- **Automatic SQL Generation**: GibsonAI generates the SQL queries to answer your questions +- **Gibson Studio Integration**: Run generated queries in the intuitive data management UI +- **Real-time Insights**: Get immediate answers about your data without writing SQL - if response.status_code == 200: - results = response.json() - print(f"Found {len(results)} pending reviews") - return results - else: - print(f"Query failed: {response.status_code}") - return None -``` +## Step-by-step guide -### Agent Workflow Integration +### 1. 
Make schema changes with natural language -```python -class WorkflowAgent: - def __init__(self, agent_id, api_key): - self.agent_id = agent_id - self.decision_tracker = AgentDecisionTracker(api_key) - self.analyzer = DecisionAnalyzer(api_key) +```bash +gibson modify users "Add a profile_picture_url column and an is_verified boolean field" +``` - def make_decision(self, decision_type, input_data): - """Make a decision and log it""" - # Agent decision-making logic here - output_data = self.process_decision(input_data) - confidence_score = self.calculate_confidence(input_data, output_data) +GibsonAI automatically: - # Log the decision - decision_record = self.decision_tracker.log_decision( - self.agent_id, - decision_type, - input_data, - output_data, - confidence_score - ) +- Generates the migration script +- Validates the changes +- Deploys with zero downtime +- Updates your Pydantic schemas and SQLAlchemy models - # Flag for review if confidence is low - if confidence_score < 0.7: - self.decision_tracker.flag_for_review( - decision_record["id"], - "Low confidence score", - "high" - ) +### 2. Explore your data with text-to-SQL - return output_data +Use Gibson Studio to ask questions about your data: - def execute_action(self, action_type, parameters): - """Execute an action and log it""" - start_time = datetime.now() +- "Show me all users who joined in the last 30 days" +- "What's the average order value by customer segment?" +- "Find duplicate email addresses in the users table" - try: - # Execute the action - result = self.perform_action(action_type, parameters) - status = "success" - except Exception as e: - result = str(e) - status = "error" +GibsonAI generates and runs the SQL automatically. - end_time = datetime.now() - execution_time = (end_time - start_time).total_seconds() +### 3. 
Access your data via APIs - # Log the action - self.decision_tracker.log_action( - self.agent_id, - action_type, - parameters, - result, - status, - execution_time - ) +Integration options: - return result +- **OpenAPI Spec**: Available in your project settings +- **Direct Connection**: Connection string available in the UI +- **RESTful APIs**: Base URL `https://api.gibsonai.com` + - SQL queries: `/v1/-/query` + - Table operations: `/v1/-/[table-name-in-kebab-case]` +- **API Documentation**: Available in the data API section of the UI - def process_decision(self, input_data): - """Process decision (placeholder for actual logic)""" - # Implement actual decision logic here - return {"decision": "approved", "reason": "meets criteria"} +## Use cases - def calculate_confidence(self, input_data, output_data): - """Calculate confidence score (placeholder for actual logic)""" - # Implement confidence calculation logic here - return 0.85 + - def perform_action(self, action_type, parameters): - """Perform action (placeholder for actual logic)""" - # Implement actual action logic here - return f"Action {action_type} completed successfully" -``` +Schema versioning and deployment -### Customer Service Agent Example +AI-driven schema generation -```python -# Create schema for customer service workflow -# gibson modify customer_tickets "Create customer_tickets table with id, customer_id, issue_type, description, priority, status, assigned_agent" -# gibson modify agent_responses "Create agent_responses table with id, ticket_id, agent_id, response_text, response_type, timestamp" -# gibson modify escalation_rules "Create escalation_rules table with id, rule_name, conditions, escalation_action, priority_threshold" -# gibson code models -# gibson merge +Unified API layer -class CustomerServiceAgent(WorkflowAgent): - def handle_customer_ticket(self, ticket_id, customer_message): - """Handle customer service ticket""" + - # Make decision about response - input_data = { - "ticket_id": ticket_id, - "customer_message": customer_message - } +## What's next? - decision = self.make_decision("customer_response", input_data) + - # Execute response action - response_result = self.execute_action("send_response", { - "ticket_id": ticket_id, - "response": decision["response"] - }) - # Update workflow state - self.decision_tracker.update_workflow_state( - ticket_id, - "awaiting_customer_response", - "Agent responded to customer" - ) +--- +title: Optimize database with AI assistant +subtitle: Leverage GibsonAI's text-to-SQL and Gibson Studio for database optimization and data analysis +enableTableOfContents: true +updatedOn: '2025-01-08T00:00:00.000Z' +--- - return response_result +Leverage GibsonAI's text-to-SQL capabilities and Gibson Studio for database optimization and data analysis. Ask questions about your data in natural language and get instant SQL queries and insights to optimize your database performance. - def escalate_ticket(self, ticket_id, escalation_reason): - """Escalate ticket to human agent""" +## How it works - # Log escalation decision - escalation_decision = self.make_decision("escalation", { - "ticket_id": ticket_id, - "reason": escalation_reason - }) +GibsonAI provides text-to-SQL functionality that allows you to ask questions about your data in natural language. The system automatically generates SQL queries to answer your questions, which you can run directly in Gibson Studio, the intuitive data management UI. 
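+The same questions can also be asked programmatically. This is a minimal sketch against the natural language query endpoint; the API key is a placeholder:
+
+```python
+import requests
+
+response = requests.post(
+    "https://api.gibsonai.com/v1/-/query",
+    json={"query": "Which queries are taking the longest to execute?"},
+    headers={"Authorization": "Bearer your_api_key"},  # placeholder key
+)
+
+if response.status_code == 200:
+    for row in response.json():
+        print(row)
+else:
+    print(f"Query failed: {response.status_code}")
+```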
- # Execute escalation action - escalation_result = self.execute_action("escalate_ticket", { - "ticket_id": ticket_id, - "escalation_reason": escalation_reason - }) + - # Update workflow state - self.decision_tracker.update_workflow_state( - ticket_id, - "escalated", - f"Escalated due to: {escalation_reason}" - ) +Text-to-SQL - return escalation_result -``` +Gibson Studio -## Use Cases +Instant insights -### Decision Auditing +API integration -Perfect for: + -- Tracking all agent decisions with full context -- Maintaining audit logs for compliance -- Analyzing decision patterns and quality -- Identifying areas for improvement +## Key Features -### Workflow Management +### Text-to-SQL Analysis -Enable: +- **Natural Language Queries**: Ask questions about your data in plain English +- **Automatic SQL Generation**: GibsonAI generates optimized SQL queries +- **Performance Insights**: Get recommendations for query optimization +- **Data Exploration**: Discover patterns and insights in your data -- Tracking workflow states and transitions -- Identifying bottlenecks and inefficiencies -- Monitoring workflow performance -- Managing complex multi-step processes +### Gibson Studio Integration -### Performance Analysis +- **Visual Query Interface**: Run generated SQL queries in an intuitive UI +- **Real-time Results**: See query results instantly +- **Data Visualization**: Visual representations of your query results +- **Query History**: Track and reuse previous queries -Support: +### Database Optimization -- Analyzing agent performance metrics -- Identifying high-risk decisions -- Tracking confidence scores and success rates -- Optimizing agent behavior based on data +- **Performance Monitoring**: Track query performance automatically +- **Index Recommendations**: Get suggestions for database indexes +- **Schema Optimization**: Recommendations for schema improvements +- **MySQL Optimization**: Specialized optimizations for MySQL databases -### Quality Assurance +## Step-by-step guide -Allow: +### 1. Analyze database performance -- Flagging decisions for review -- Tracking review processes -- Maintaining quality standards -- Continuous improvement based on feedback +Use text-to-SQL to understand your database performance: -## Benefits for AI Agent Workflows +**Performance Questions:** -### Comprehensive Tracking +- "Which queries are taking the longest to execute?" +- "Show me tables with the most frequent access patterns" +- "What's the average response time for API calls to each table?" +- "Find any slow-running queries in the last week" -- **Full Audit Trail**: Complete record of all agent decisions and actions -- **Structured Data**: Organized data for easy analysis and reporting -- **Natural Language Queries**: Query decision data using natural language -- **Real-time Updates**: Track workflow states in real-time +### 2. Optimize data structure -### Performance Insights +Ask questions about your data structure: -- **Decision Analysis**: Analyze decision patterns and quality -- **Confidence Tracking**: Monitor confidence scores and success rates -- **Bottleneck Identification**: Identify workflow bottlenecks and inefficiencies -- **Performance Metrics**: Track agent performance over time +**Structure Analysis:** -### Scalable Architecture +- "Which tables have the most rows?" +- "Show me tables with duplicate data" +- "Find columns that are rarely used" +- "What's the storage size of each table?" 
-- **Database Management**: Easily modify schema as workflow needs evolve -- **API Access**: REST APIs for integration with existing systems -- **Natural Language Interface**: Use natural language for complex queries -- **Flexible Data Model**: Adapt to different workflow requirements +### 3. Monitor data quality -## Best Practices +Use Gibson Studio to monitor data quality: -### Data Management +**Quality Checks:** -- **Consistent Logging**: Ensure all decisions and actions are logged consistently -- **Data Quality**: Maintain high-quality data for accurate analysis -- **Retention Policies**: Implement appropriate data retention policies -- **Security**: Secure sensitive workflow data appropriately +- "Find any null values in required fields" +- "Show me duplicate records across tables" +- "Check for data consistency issues" +- "Identify orphaned records" -### Workflow Design +### 4. Access optimized data -- **Clear States**: Define clear workflow states and transitions -- **Decision Criteria**: Establish clear criteria for decision-making -- **Escalation Rules**: Define when and how to escalate decisions -- **Performance Metrics**: Track meaningful performance metrics +Integration options for your applications: -### Analysis and Improvement +- **OpenAPI Spec**: Available in your project settings +- **Direct Connection**: Connection string available in the UI +- **RESTful APIs**: Base URL `https://api.gibsonai.com` + - SQL queries: `/v1/-/query` + - Table operations: `/v1/-/[table-name-in-kebab-case]` +- **API Documentation**: Available in the data API section of the UI -- **Regular Analysis**: Regularly analyze decision data for insights -- **Feedback Loops**: Implement feedback loops for continuous improvement -- **Pattern Recognition**: Identify patterns in decision-making -- **Optimization**: Continuously optimize workflows based on data +## Example optimization queries -## Getting Started +### Performance Analysis -1. **Design Workflow Schema**: Define your workflow and decision tracking schema -2. **Create Database**: Use Gibson CLI to create your database schema -3. **Integrate Tracking**: Add decision tracking to your agent workflows -4. **Analyze Data**: Use natural language queries to analyze decision data -5. **Optimize**: Continuously improve workflows based on insights +```sql +-- Generated from: "Show me the slowest performing endpoints" +SELECT + endpoint, + AVG(response_time) as avg_response_time, + COUNT(*) as request_count +FROM api_logs +WHERE created_at >= DATE_SUB(NOW(), INTERVAL 7 DAY) +GROUP BY endpoint +ORDER BY avg_response_time DESC +LIMIT 10; +``` -## Gibson CLI Commands +### Data Quality Check -```bash -# Create workflow tracking schema -gibson modify table_name "description of workflow table" -gibson code models -gibson merge +```sql +-- Generated from: "Find duplicate email addresses" +SELECT + email, + COUNT(*) as duplicate_count +FROM users +GROUP BY email +HAVING COUNT(*) > 1; +``` -# Generate models for workflow integration -gibson code models -gibson code schemas +### Storage Analysis + +```sql +-- Generated from: "Which tables use the most storage?" +SELECT + table_name, + ROUND(((data_length + index_length) / 1024 / 1024), 2) as table_size_mb +FROM information_schema.tables +WHERE table_schema = DATABASE() +ORDER BY table_size_mb DESC; ``` -Ready to implement database-driven workflow tracking for your AI agents? [Get started with GibsonAI](/get-started/signing-up). 
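+The checks from the step-by-step guide above can also be scripted. Here is a minimal sketch that runs the data quality questions in one pass; the endpoint and payload shape follow the examples above, and the API key is a placeholder:
+
+```python
+import requests
+
+QUALITY_CHECKS = [
+    "Find any null values in required fields",
+    "Show me duplicate records across tables",
+    "Identify orphaned records",
+]
+
+def run_quality_checks(api_key):
+    """Send each data quality question to the text-to-SQL endpoint."""
+    headers = {"Authorization": f"Bearer {api_key}"}
+    for question in QUALITY_CHECKS:
+        response = requests.post(
+            "https://api.gibsonai.com/v1/-/query",
+            json={"query": question},
+            headers=headers,
+        )
+        rows = response.json() if response.status_code == 200 else []
+        print(f"{question}: {len(rows)} rows returned")
+
+run_quality_checks("your_api_key")  # placeholder key
+```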
+## Use cases + + + +Schema updates and migrations + +Unified API layer + +RAG schema generation + + + +## What's next? + + --- -title: REST APIs for AI Agent Data Access -subtitle: Auto-generated REST APIs that agents can consume for database operations +title: Development environments for AI agent databases +subtitle: Create and manage development environments for AI agent database testing and iteration enableTableOfContents: true updatedOn: '2025-01-08T00:00:00.000Z' --- -Auto-generated REST APIs that agents can consume for database operations. GibsonAI automatically creates REST endpoints based on your database schema, providing immediate API access for AI agents. +Create and manage development environments for AI agent database testing and iteration. Use GibsonAI's natural language database management to set up isolated environments for agent development and testing. -MCP Integration +MCP Integration -Auto-Generated APIs +Database Management -CLI Tools +CLI Tools ## Key Features -### Auto-Generated REST APIs - -- **Schema-Based Generation**: APIs automatically generated from database schemas -- **CRUD Operations**: Full Create, Read, Update, Delete operations -- **Immediate Availability**: APIs available as soon as schema is created -- **Automatic Updates**: APIs update when schema changes - -### Agent-Optimized Endpoints +### Environment Separation -- **RESTful Design**: Standard REST API patterns -- **JSON Response Format**: Consistent JSON responses -- **Error Handling**: Comprehensive error responses -- **Data Validation**: Built-in request validation +- **Development Database**: Separate database for development and testing +- **Production Database**: Live database for production agents +- **Schema Isolation**: Independent schema evolution in each environment +- **Safe Testing**: Test changes without affecting production data -### Text-to-SQL Integration +### Natural Language Management -- **Natural Language Queries**: Convert natural language to SQL -- **Complex Queries**: Handle multi-table joins and aggregations -- **Safe Execution**: Protected query execution -- **Flexible Results**: Return results in agent-friendly formats +- **Schema Creation**: Create database schemas using natural language +- **Table Operations**: Add, modify, and remove tables with simple prompts +- **Environment Switching**: Switch between development and production contexts +- **Version Control**: Track schema changes across environments ## Implementation Examples -### Basic CRUD Operations +### Setting Up Development Environment ```python -import requests - -# Base API URL -BASE_URL = "https://api.gibsonai.com/v1/-" -headers = {"Authorization": "Bearer your_api_key"} - -# GET: Retrieve all records -response = requests.get(f"{BASE_URL}/customers", headers=headers) -customers = response.json() - -# GET: Retrieve specific record -response = requests.get(f"{BASE_URL}/customers/123", headers=headers) -customer = response.json() - -# POST: Create new record -new_customer = { - "name": "John Doe", - "email": "john@example.com", - "phone": "+1-555-0123" -} -response = requests.post(f"{BASE_URL}/customers", json=new_customer, headers=headers) -created_customer = response.json() - -# PUT: Update existing record -updated_data = {"phone": "+1-555-0456"} -response = requests.put(f"{BASE_URL}/customers/123", json=updated_data, headers=headers) -updated_customer = response.json() +# Using Gibson CLI for development environment +# Create development database schema +# gibson modify users "Create a users table with 
id, username, email, created_at" +# gibson modify conversations "Create conversations table with user_id, agent_id, message, response" +# gibson code models +# gibson merge -# DELETE: Remove record -response = requests.delete(f"{BASE_URL}/customers/123", headers=headers) +# Test schema changes in development +# gibson modify users "Add last_login column to users table" +# gibson code models +# (Test the changes before applying to production) ``` -### Text-to-SQL Queries +### Environment Configuration ```python -# Natural language queries through API -query_request = { - "query": "Show me all customers who placed orders in the last 30 days" +# Development environment configuration +dev_config = { + "database": "development", + "api_key": "dev_api_key", + "base_url": "https://api.gibsonai.com/v1/-" } -response = requests.post( - f"{BASE_URL}/query", - json=query_request, - headers=headers -) - -results = response.json() -print(f"Found {len(results)} customers") - -# Complex analytical queries -query_request = { - "query": "What is the average order value by customer segment for this month?" +# Production environment configuration +prod_config = { + "database": "production", + "api_key": "prod_api_key", + "base_url": "https://api.gibsonai.com/v1/-" } - -response = requests.post( - f"{BASE_URL}/query", - json=query_request, - headers=headers -) - -analytics = response.json() ``` -### Agent Integration Example +### Testing Agent Database Operations ```python -class CustomerServiceAgent: - def __init__(self, api_key): - self.api_key = api_key - self.base_url = "https://api.gibsonai.com/v1/-" - self.headers = {"Authorization": f"Bearer {api_key}"} +import requests - def get_customer_info(self, customer_id): - """Get customer information""" - response = requests.get( - f"{self.base_url}/customers/{customer_id}", +class AgentDatabaseTester: + def __init__(self, environment="development"): + self.environment = environment + self.api_key = "dev_api_key" if environment == "development" else "prod_api_key" + self.base_url = "https://api.gibsonai.com/v1/-" + self.headers = {"Authorization": f"Bearer {self.api_key}"} + + def test_user_creation(self): + """Test user creation in development environment""" + test_user = { + "username": "test_user", + "email": "test@example.com", + "created_at": "2024-01-15T10:30:00Z" + } + + response = requests.post( + f"{self.base_url}/users", + json=test_user, headers=self.headers ) - return response.json() - def get_order_history(self, customer_id): - """Get customer order history using natural language""" - query = f"Show me all orders for customer {customer_id} ordered by date" + if response.status_code == 201: + print("✓ User creation test passed") + return response.json() + else: + print(f"✗ User creation test failed: {response.status_code}") + return None + + def test_conversation_logging(self, user_id): + """Test conversation logging""" + test_conversation = { + "user_id": user_id, + "agent_id": "agent_001", + "message": "Hello, I need help with my order", + "response": "I'd be happy to help with your order. Can you provide your order number?" 
+ } + response = requests.post( - f"{self.base_url}/query", - json={"query": query}, + f"{self.base_url}/conversations", + json=test_conversation, headers=self.headers ) - return response.json() - def create_support_ticket(self, customer_id, issue): - """Create new support ticket""" - ticket_data = { - "customer_id": customer_id, - "issue": issue, - "status": "open", - "created_at": "2024-01-15T10:30:00Z" + if response.status_code == 201: + print("✓ Conversation logging test passed") + return response.json() + else: + print(f"✗ Conversation logging test failed: {response.status_code}") + return None + + def test_data_queries(self): + """Test natural language queries""" + query_request = { + "query": "Show me all users created in the last 24 hours" } + response = requests.post( - f"{self.base_url}/support-tickets", - json=ticket_data, + f"{self.base_url}/query", + json=query_request, headers=self.headers ) - return response.json() + + if response.status_code == 200: + print("✓ Query test passed") + return response.json() + else: + print(f"✗ Query test failed: {response.status_code}") + return None ``` -### Error Handling +### Schema Migration Testing ```python -def safe_api_call(endpoint, method="GET", data=None): - """Safe API call with error handling""" - try: - if method == "GET": - response = requests.get(f"{BASE_URL}/{endpoint}", headers=headers) - elif method == "POST": - response = requests.post(f"{BASE_URL}/{endpoint}", json=data, headers=headers) +def test_schema_migration(): + """Test schema changes in development before production""" - response.raise_for_status() - return response.json() + # 1. Test schema modification + # gibson modify users "Add preferences column as JSON to users table" + # gibson code models - except requests.exceptions.HTTPError as e: - print(f"HTTP Error: {e.response.status_code}") - print(f"Response: {e.response.text}") - return None - except requests.exceptions.RequestException as e: - print(f"Request Error: {e}") - return None + # 2. Test data operations with new schema + tester = AgentDatabaseTester("development") + + # Test creating user with new preferences field + test_user = { + "username": "test_user_2", + "email": "test2@example.com", + "preferences": {"theme": "dark", "notifications": True} + } + + response = requests.post( + f"{tester.base_url}/users", + json=test_user, + headers=tester.headers + ) + + if response.status_code == 201: + print("✓ Schema migration test passed") + # Now safe to apply to production + # gibson merge # Apply changes to production database + else: + print("✗ Schema migration test failed") + # Rollback changes + # gibson forget last ``` -## API Endpoints +## Environment Workflows -### Standard Table Endpoints +### Development Workflow -For each table in your schema, GibsonAI automatically generates: +```python +# 1. Create new features in development +# gibson modify feature_table "Create new table for feature testing" +# gibson code models -``` -GET /v1/-/table-name # List all records -GET /v1/-/table-name/{id} # Get specific record -POST /v1/-/table-name # Create new record -PUT /v1/-/table-name/{id} # Update existing record -DELETE /v1/-/table-name/{id} # Delete record -``` +# 2. Test the feature +def test_new_feature(): + # Run tests against development database + pass -### Special Endpoints +# 3. Validate schema changes +def validate_schema(): + # Check schema integrity + pass -``` -POST /v1/-/query # Text-to-SQL queries -GET /v1/-/schema # Get database schema -GET /v1/-/health # API health check +# 4. 
Deploy to production +# gibson merge # Apply changes to production ``` -## Query Parameters - -### Filtering and Pagination +### Agent Testing Workflow ```python -# Filter records -params = { - "status": "active", - "created_after": "2024-01-01" -} -response = requests.get(f"{BASE_URL}/customers", params=params, headers=headers) +class AgentEnvironmentTester: + def __init__(self): + self.dev_tester = AgentDatabaseTester("development") + self.prod_tester = AgentDatabaseTester("production") -# Pagination -params = { - "page": 2, - "limit": 50 -} -response = requests.get(f"{BASE_URL}/orders", params=params, headers=headers) + def run_development_tests(self): + """Run comprehensive tests in development""" + print("Running development environment tests...") -# Sorting -params = { - "sort": "created_at", - "order": "desc" -} -response = requests.get(f"{BASE_URL}/products", params=params, headers=headers) -``` + # Test user operations + user = self.dev_tester.test_user_creation() + if user: + self.dev_tester.test_conversation_logging(user["id"]) -## Agent Use Cases + # Test queries + self.dev_tester.test_data_queries() -### Data Retrieval + print("Development tests completed") -Perfect for agents that need to: + def validate_production_readiness(self): + """Validate that changes are ready for production""" + print("Validating production readiness...") -- Look up customer information -- Retrieve order history -- Access product catalogs -- Query analytics data + # Check schema consistency + # Verify data integrity + # Test performance -### Data Creation + print("Production validation completed") -Enable agents to: + def deploy_to_production(self): + """Deploy validated changes to production""" + print("Deploying to production...") -- Create new customer records -- Generate support tickets -- Log user interactions -- Store processed data + # Apply schema changes + # gibson merge -### Data Updates + print("Production deployment completed") +``` -Allow agents to: +## Use Cases -- Update customer preferences -- Modify order statuses -- Change product information -- Track interaction history +### Agent Development -### Complex Queries +Perfect for: -Support agents with: +- Testing new agent features +- Validating database schema changes +- Experimenting with different data models +- Debugging agent database interactions -- Multi-table joins -- Aggregation queries -- Time-based filtering -- Conditional logic +### Schema Evolution -## MCP Server Integration +Enable: -Connect AI tools through MCP server: +- Safe schema modifications +- Testing complex migrations +- Validating data integrity +- Rolling back problematic changes -```python -# Example MCP server configuration for API access -mcp_config = { - "server_name": "gibsonai-api", - "base_url": "https://api.gibsonai.com/v1/-", - "authentication": { - "type": "bearer", - "token": "your_api_key" - }, - "capabilities": [ - "query_database", - "create_records", - "update_records", - "delete_records" +### Team Collaboration + +Support: + +- Shared development environments +- Consistent schema management +- Collaborative testing +- Knowledge sharing + +## Environment Management + +### Development Environment + +```python +# Development environment setup +def setup_dev_environment(): + # Create development database schema + # gibson modify users "Create users table for development testing" + # gibson modify agent_logs "Create agent_logs table for tracking agent behavior" + # gibson code models + # gibson merge + + # Populate with test data + 
create_test_data() + + print("Development environment ready") + +def create_test_data(): + """Create test data for development""" + test_users = [ + {"username": "dev_user_1", "email": "dev1@example.com"}, + {"username": "dev_user_2", "email": "dev2@example.com"} ] -} + + # Create test users + for user in test_users: + requests.post( + "https://api.gibsonai.com/v1/-/users", + json=user, + headers={"Authorization": "Bearer dev_api_key"} + ) ``` -## Benefits for AI Agents +### Production Environment -- **Immediate Access**: APIs available instantly when schema is created -- **No Coding Required**: Auto-generated based on database schema -- **Natural Language**: Query database using natural language -- **Consistent Interface**: Standard REST API patterns -- **Error Handling**: Built-in error handling and validation -- **Scalable**: Handles high-volume agent requests -- **Secure**: Authentication and authorization built-in +```python +# Production environment management +def prepare_production_deployment(): + """Prepare for production deployment""" -## Getting Started + # Validate development changes + if validate_development_changes(): + # Apply to production + # gibson merge + print("Production deployment successful") + else: + print("Development validation failed") -1. **Create Database Schema**: Use natural language to define your schema -2. **Generate Models**: Create Python models with Gibson CLI -3. **Deploy Schema**: Apply changes to get APIs -4. **Test Endpoints**: Use the auto-generated API endpoints -5. **Connect Agents**: Integrate agents with the APIs +def validate_development_changes(): + """Validate changes before production deployment""" -## OpenAPI Documentation + # Check schema integrity + # Verify data migrations + # Test performance + # Validate security -Each GibsonAI project provides: + return True # Return validation result +``` -- Complete OpenAPI specification -- Interactive API documentation -- Code examples in multiple languages -- Authentication details -- Error response formats +## Benefits for AI Agent Development -Access your OpenAPI spec through the project settings in Gibson Studio. +### Safe Development +- **Isolated Testing**: Test changes without affecting production +- **Schema Validation**: Validate schema changes before deployment +- **Data Integrity**: Ensure data consistency across environments +- **Risk Mitigation**: Reduce risk of production issues ---- -title: Rapid database schema creation for AI agent workflows -subtitle: Create database schemas quickly for different AI agent actions using natural language -enableTableOfContents: true -updatedOn: '2025-01-08T00:00:00.000Z' ---- +### Faster Iteration -Create database schemas quickly for different AI agent actions using natural language. Use GibsonAI's natural language database management to rapidly set up data structures for various agent workflows and use cases. +- **Rapid Development**: Quick schema changes in development +- **Immediate Testing**: Test changes immediately +- **Natural Language**: Use natural language for schema modifications +- **Automated Models**: Automatic generation of Python models -## How it works +### Team Collaboration -GibsonAI enables rapid database schema creation using natural language descriptions. Instead of manually designing database schemas, you can describe what you need in plain English and have GibsonAI generate the appropriate database structure for your agent workflows. 
+- **Shared Environments**: Team members can share development environments +- **Consistent Schemas**: Ensure consistency across team +- **Knowledge Transfer**: Easy knowledge sharing through natural language +- **Collaborative Testing**: Team-based testing and validation - +## Best Practices -MCP Integration +### Environment Separation -Database Management +- **Clear Boundaries**: Keep development and production separate +- **Consistent Naming**: Use consistent naming conventions +- **Access Control**: Proper access controls for each environment +- **Documentation**: Document environment-specific configurations -CLI Tools +### Testing Strategy - +- **Comprehensive Tests**: Test all agent database operations +- **Schema Validation**: Validate schema changes thoroughly +- **Performance Testing**: Test performance in development +- **Security Testing**: Ensure security measures are effective -## Key Features +### Deployment Process -### Natural Language Schema Creation +- **Validation First**: Always validate in development +- **Gradual Rollout**: Deploy changes gradually +- **Monitor Performance**: Monitor production after deployment +- **Rollback Plan**: Have rollback procedures ready -- **Instant Schema Generation**: Create database schemas from natural language descriptions -- **Table Definition**: Define tables with relationships using simple prompts -- **Data Type Selection**: Automatically choose appropriate data types -- **Index Creation**: Generate indexes for optimal performance +## Getting Started -### Rapid Development +1. **Set up Development Environment**: Create separate development database +2. **Configure API Keys**: Set up separate API keys for each environment +3. **Create Test Schema**: Define initial database schema for testing +4. **Develop Testing Workflow**: Create systematic testing procedures +5. **Validate and Deploy**: Test thoroughly before production deployment -- **Quick Prototyping**: Rapidly prototype database schemas for different use cases -- **Iterative Design**: Easily modify schemas as requirements evolve -- **Immediate APIs**: Get REST APIs instantly when schema is created -- **Model Generation**: Automatically generate Python models +## Gibson CLI Commands -### Flexible Schema Management +```bash +# Development environment commands +gibson modify table_name "description of changes" +gibson code models +gibson code schemas -- **Multiple Projects**: Create different database schemas for different agent workflows -- **Schema Evolution**: Easily modify existing schemas as needs change -- **Testing Environments**: Create test schemas for validation -- **Production Deployment**: Deploy schemas to production when ready +# Apply changes to development +gibson merge -## Use Cases +# Reset development environment +gibson forget last +gibson build datastore +``` -### Agent Workflow Development +Ready to set up development environments for your AI agents? [Get started with GibsonAI](/get-started/signing-up). 
-Perfect for: -- Creating database schemas for new agent workflows -- Prototyping data structures for different use cases -- Testing schema designs with sample data -- Validating data models before production +--- +title: Database-driven workflows for AI agent decision tracking +subtitle: Use GibsonAI to track and manage AI agent decisions and workflows through database operations +enableTableOfContents: true +updatedOn: '2025-01-08T00:00:00.000Z' +--- -### Rapid Prototyping +Use GibsonAI to track and manage AI agent decisions and workflows through database operations. Create structured data storage for agent actions, decision logs, and workflow states using natural language database management. -Enable: + -- Quick database setup for proof-of-concept projects -- Testing different data models and relationships -- Validating agent data requirements -- Iterating on schema designs +MCP Integration -### Multi-Agent Systems +Database Management -Support: +CLI Tools -- Creating specialized databases for different agent types -- Isolating data for different agent workflows -- Managing complex multi-agent data relationships -- Coordinating data access across agent systems + -## Implementation Examples +## Key Features -### Creating Schema for Customer Service Agent +### Decision Tracking Database -```python -# Using Gibson CLI to create customer service database schema -# Create customer service tables with natural language -# gibson modify customers "Create customers table with id, name, email, phone, account_status, created_at" -# gibson modify support_tickets "Create support_tickets table with id, customer_id, issue_type, description, priority, status, created_at, resolved_at" -# gibson modify agent_responses "Create agent_responses table with id, ticket_id, agent_id, response_text, timestamp, satisfaction_score" -# gibson modify escalations "Create escalations table with id, ticket_id, escalated_to, reason, escalated_at" +- **Agent Actions**: Track all agent actions and decisions in structured format +- **Decision Logs**: Store detailed logs of agent decision-making processes +- **Workflow States**: Maintain current state of agent workflows +- **Audit Trail**: Complete audit trail of agent operations -# Generate models and deploy -# gibson code models -# gibson merge -``` +### Natural Language Database Management -### Creating Schema for E-commerce Agent +- **Schema Creation**: Create database schemas using natural language +- **Table Management**: Add and modify tables with simple prompts +- **Query Operations**: Query decision data using natural language +- **Relationship Building**: Define relationships between workflow entities + +### Workflow Data APIs + +- **REST Endpoints**: Auto-generated APIs for workflow data access +- **Query Interface**: Natural language query endpoint for complex analysis +- **Real-time Updates**: Update workflow states in real-time +- **Data Validation**: Built-in validation for workflow data + +## Implementation Examples + +### Creating Workflow Database Schema ```python -# Create e-commerce database schema quickly -# gibson modify products "Create products table with id, name, description, price, category, stock_quantity, active" -# gibson modify orders "Create orders table with id, customer_id, total_amount, status, payment_method, created_at" -# gibson modify order_items "Create order_items table with id, order_id, product_id, quantity, unit_price" -# gibson modify shopping_carts "Create shopping_carts table with id, customer_id, product_id, quantity, 
added_at" -# gibson modify reviews "Create reviews table with id, product_id, customer_id, rating, comment, created_at" +# Using Gibson CLI to create workflow tracking schema +# Create decision tracking tables +# gibson modify agent_decisions "Create agent_decisions table with id, agent_id, decision_type, input_data, output_data, confidence_score, timestamp" +# gibson modify workflow_states "Create workflow_states table with id, workflow_id, current_state, previous_state, transition_reason, updated_at" +# gibson modify action_logs "Create action_logs table with id, agent_id, action_type, parameters, result, status, execution_time" +# gibson modify review_flags "Create review_flags table with id, decision_id, flag_reason, priority, status, created_at" -# Generate models and deploy +# Generate models and apply changes # gibson code models # gibson merge ``` -### Creating Schema for Analytics Agent - -```python -# Create analytics database schema -# gibson modify user_events "Create user_events table with id, user_id, event_type, event_data, timestamp, session_id" -# gibson modify page_views "Create page_views table with id, user_id, page_url, referrer, timestamp, duration" -# gibson modify conversions "Create conversions table with id, user_id, conversion_type, value, timestamp" -# gibson modify user_segments "Create user_segments table with id, user_id, segment_name, segment_value, calculated_at" - -# Generate models and deploy -# gibson code models -# gibson merge -``` - -### Quick Schema Creation Framework - -```python -import subprocess -import json - -class QuickSchemaCreator: - def __init__(self): - self.schemas = {} - - def create_schema_for_action(self, action_name, table_descriptions): - """Create database schema for specific agent action""" - print(f"Creating schema for action: {action_name}") - - # Store schema description - self.schemas[action_name] = { - "tables": table_descriptions, - "created_at": datetime.now().isoformat() - } - - # Generate Gibson CLI commands - for table_name, description in table_descriptions.items(): - command = f'gibson modify {table_name} "{description}"' - print(f"Executing: {command}") - - # Note: In real implementation, you would execute the command - # result = subprocess.run(command, shell=True, capture_output=True, text=True) - # if result.returncode != 0: - # print(f"Error creating table {table_name}: {result.stderr}") - # return False - - # Generate models - print("Generating models...") - # subprocess.run("gibson code models", shell=True) - - # Deploy schema - print("Deploying schema...") - # subprocess.run("gibson merge", shell=True) - - print(f"Schema for {action_name} created successfully!") - return True - - def create_customer_service_schema(self): - """Create schema for customer service agent""" - table_descriptions = { - "customers": "Create customers table with id, name, email, phone, account_status, created_at", - "support_tickets": "Create support_tickets table with id, customer_id, issue_type, description, priority, status, created_at, resolved_at", - "agent_responses": "Create agent_responses table with id, ticket_id, agent_id, response_text, timestamp, satisfaction_score", - "escalations": "Create escalations table with id, ticket_id, escalated_to, reason, escalated_at" - } - - return self.create_schema_for_action("customer_service", table_descriptions) - - def create_ecommerce_schema(self): - """Create schema for e-commerce agent""" - table_descriptions = { - "products": "Create products table with id, name, description, price, 
category, stock_quantity, active", - "orders": "Create orders table with id, customer_id, total_amount, status, payment_method, created_at", - "order_items": "Create order_items table with id, order_id, product_id, quantity, unit_price", - "shopping_carts": "Create shopping_carts table with id, customer_id, product_id, quantity, added_at", - "reviews": "Create reviews table with id, product_id, customer_id, rating, comment, created_at" - } - - return self.create_schema_for_action("ecommerce", table_descriptions) - - def create_analytics_schema(self): - """Create schema for analytics agent""" - table_descriptions = { - "user_events": "Create user_events table with id, user_id, event_type, event_data, timestamp, session_id", - "page_views": "Create page_views table with id, user_id, page_url, referrer, timestamp, duration", - "conversions": "Create conversions table with id, user_id, conversion_type, value, timestamp", - "user_segments": "Create user_segments table with id, user_id, segment_name, segment_value, calculated_at" - } - - return self.create_schema_for_action("analytics", table_descriptions) - - def create_content_management_schema(self): - """Create schema for content management agent""" - table_descriptions = { - "articles": "Create articles table with id, title, content, author_id, category, status, created_at, updated_at", - "comments": "Create comments table with id, article_id, user_id, comment_text, created_at, approved", - "categories": "Create categories table with id, name, description, parent_id", - "tags": "Create tags table with id, name, description", - "article_tags": "Create article_tags table with id, article_id, tag_id" - } - - return self.create_schema_for_action("content_management", table_descriptions) -``` - -### Testing Schema with Sample Data +### Tracking Agent Decisions ```python import requests +from datetime import datetime -class SchemaValidator: +class AgentDecisionTracker: def __init__(self, api_key): self.api_key = api_key self.base_url = "https://api.gibsonai.com/v1/-" self.headers = {"Authorization": f"Bearer {api_key}"} - def validate_schema_with_sample_data(self, schema_name, sample_data): - """Validate schema by inserting sample data""" - print(f"Validating schema: {schema_name}") + def log_decision(self, agent_id, decision_type, input_data, output_data, confidence_score): + """Log agent decision to database""" + decision_data = { + "agent_id": agent_id, + "decision_type": decision_type, + "input_data": input_data, + "output_data": output_data, + "confidence_score": confidence_score, + "timestamp": datetime.now().isoformat() + } - success_count = 0 - error_count = 0 + response = requests.post( + f"{self.base_url}/agent-decisions", + json=decision_data, + headers=self.headers + ) - for table_name, records in sample_data.items(): - print(f"Testing table: {table_name}") + if response.status_code == 201: + print(f"Decision logged: {decision_type}") + return response.json() + else: + print(f"Failed to log decision: {response.status_code}") + return None - for record in records: - try: - response = requests.post( - f"{self.base_url}/{table_name}", - json=record, - headers=self.headers - ) + def update_workflow_state(self, workflow_id, new_state, transition_reason): + """Update workflow state""" + # First, get current state + current_response = requests.get( + f"{self.base_url}/workflow-states/{workflow_id}", + headers=self.headers + ) - if response.status_code == 201: - success_count += 1 - print(f" ✓ Record inserted successfully") - else: - 
error_count += 1 - print(f" ✗ Failed to insert record: {response.status_code}") + if current_response.status_code == 200: + current_data = current_response.json() + previous_state = current_data.get("current_state") + else: + previous_state = None - except Exception as e: - error_count += 1 - print(f" ✗ Error inserting record: {e}") + # Update state + state_data = { + "workflow_id": workflow_id, + "current_state": new_state, + "previous_state": previous_state, + "transition_reason": transition_reason, + "updated_at": datetime.now().isoformat() + } - print(f"\nValidation complete: {success_count} successful, {error_count} errors") - return error_count == 0 + response = requests.put( + f"{self.base_url}/workflow-states/{workflow_id}", + json=state_data, + headers=self.headers + ) - def test_customer_service_schema(self): - """Test customer service schema with sample data""" - sample_data = { - "customers": [ - {"name": "John Doe", "email": "john@example.com", "phone": "555-0123", "account_status": "active"}, - {"name": "Jane Smith", "email": "jane@example.com", "phone": "555-0456", "account_status": "active"} - ], - "support_tickets": [ - {"customer_id": 1, "issue_type": "billing", "description": "Question about billing", "priority": "medium", "status": "open"}, - {"customer_id": 2, "issue_type": "technical", "description": "Login issues", "priority": "high", "status": "open"} - ] + if response.status_code == 200: + print(f"Workflow state updated: {new_state}") + return response.json() + else: + print(f"Failed to update workflow state: {response.status_code}") + return None + + def log_action(self, agent_id, action_type, parameters, result, status, execution_time): + """Log agent action""" + action_data = { + "agent_id": agent_id, + "action_type": action_type, + "parameters": parameters, + "result": result, + "status": status, + "execution_time": execution_time } - return self.validate_schema_with_sample_data("customer_service", sample_data) + response = requests.post( + f"{self.base_url}/action-logs", + json=action_data, + headers=self.headers + ) - def test_ecommerce_schema(self): - """Test e-commerce schema with sample data""" - sample_data = { - "products": [ - {"name": "Laptop", "description": "High-performance laptop", "price": 999.99, "category": "Electronics", "stock_quantity": 50, "active": True}, - {"name": "Mouse", "description": "Wireless mouse", "price": 29.99, "category": "Electronics", "stock_quantity": 200, "active": True} - ], - "orders": [ - {"customer_id": 1, "total_amount": 1029.98, "status": "completed", "payment_method": "credit_card"}, - {"customer_id": 2, "total_amount": 29.99, "status": "processing", "payment_method": "paypal"} - ] + if response.status_code == 201: + print(f"Action logged: {action_type}") + return response.json() + else: + print(f"Failed to log action: {response.status_code}") + return None + + def flag_for_review(self, decision_id, flag_reason, priority="medium"): + """Flag decision for review""" + flag_data = { + "decision_id": decision_id, + "flag_reason": flag_reason, + "priority": priority, + "status": "pending", + "created_at": datetime.now().isoformat() } - return self.validate_schema_with_sample_data("ecommerce", sample_data) + response = requests.post( + f"{self.base_url}/review-flags", + json=flag_data, + headers=self.headers + ) + + if response.status_code == 201: + print(f"Decision flagged for review: {flag_reason}") + return response.json() + else: + print(f"Failed to flag decision: {response.status_code}") + return None ``` -### 
Querying Created Schemas +### Querying Decision Data ```python -class SchemaQuerier: +class DecisionAnalyzer: def __init__(self, api_key): self.api_key = api_key self.base_url = "https://api.gibsonai.com/v1/-" self.headers = {"Authorization": f"Bearer {api_key}"} - def query_schema_data(self, query_description): - """Query schema data using natural language""" + def analyze_low_confidence_decisions(self): + """Find decisions with low confidence scores""" query_request = { - "query": query_description + "query": "Show all decisions with confidence score below 0.7 from the last 24 hours" } response = requests.post( @@ -4576,1005 +4481,934 @@ class SchemaQuerier: if response.status_code == 200: results = response.json() - print(f"Query results: {len(results)} records found") + print(f"Found {len(results)} low confidence decisions") return results else: print(f"Query failed: {response.status_code}") return None - def get_customer_service_metrics(self): - """Get customer service metrics""" - return self.query_schema_data( - "Show ticket count by status and average response time for the last 30 days" - ) - - def get_ecommerce_analytics(self): - """Get e-commerce analytics""" - return self.query_schema_data( - "Show total sales, order count, and top-selling products for the last week" - ) + def get_agent_performance_metrics(self, agent_id): + """Get performance metrics for specific agent""" + query_request = { + "query": f"Calculate average confidence score, success rate, and decision count for agent {agent_id} over the last 7 days" + } - def get_user_engagement_data(self): - """Get user engagement data""" - return self.query_schema_data( - "Calculate user engagement metrics including page views, session duration, and conversion rates" + response = requests.post( + f"{self.base_url}/query", + json=query_request, + headers=self.headers ) -``` -## Common Schema Patterns + if response.status_code == 200: + results = response.json() + print(f"Performance metrics for agent {agent_id}") + return results + else: + print(f"Query failed: {response.status_code}") + return None -### User Management Schema + def get_workflow_bottlenecks(self): + """Identify workflow bottlenecks""" + query_request = { + "query": "Show workflow states that have been stuck in the same state for more than 4 hours" + } -```python -def create_user_management_schema(): - """Create user management schema""" - table_descriptions = { - "users": "Create users table with id, username, email, password_hash, first_name, last_name, created_at, last_login", - "user_roles": "Create user_roles table with id, user_id, role_name, granted_at", - "user_sessions": "Create user_sessions table with id, user_id, session_token, expires_at, created_at", - "user_preferences": "Create user_preferences table with id, user_id, preference_key, preference_value" - } + response = requests.post( + f"{self.base_url}/query", + json=query_request, + headers=self.headers + ) - creator = QuickSchemaCreator() - return creator.create_schema_for_action("user_management", table_descriptions) -``` + if response.status_code == 200: + results = response.json() + print(f"Found {len(results)} workflow bottlenecks") + return results + else: + print(f"Query failed: {response.status_code}") + return None -### Inventory Management Schema + def get_pending_reviews(self): + """Get all pending review flags""" + query_request = { + "query": "Show all pending review flags ordered by priority and creation date" + } -```python -def create_inventory_schema(): - """Create inventory 
management schema""" - table_descriptions = { - "inventory_items": "Create inventory_items table with id, product_id, location, quantity, reserved_quantity, last_updated", - "stock_movements": "Create stock_movements table with id, product_id, movement_type, quantity, location, timestamp, reference", - "suppliers": "Create suppliers table with id, name, contact_email, contact_phone, address", - "purchase_orders": "Create purchase_orders table with id, supplier_id, order_date, status, total_amount" - } + response = requests.post( + f"{self.base_url}/query", + json=query_request, + headers=self.headers + ) - creator = QuickSchemaCreator() - return creator.create_schema_for_action("inventory_management", table_descriptions) + if response.status_code == 200: + results = response.json() + print(f"Found {len(results)} pending reviews") + return results + else: + print(f"Query failed: {response.status_code}") + return None ``` -### Communication Schema +### Agent Workflow Integration ```python -def create_communication_schema(): - """Create communication schema""" - table_descriptions = { - "messages": "Create messages table with id, sender_id, recipient_id, subject, content, sent_at, read_at", - "notifications": "Create notifications table with id, user_id, notification_type, title, message, read, created_at", - "communication_logs": "Create communication_logs table with id, user_id, channel, message, timestamp, status" - } - - creator = QuickSchemaCreator() - return creator.create_schema_for_action("communication", table_descriptions) -``` +class WorkflowAgent: + def __init__(self, agent_id, api_key): + self.agent_id = agent_id + self.decision_tracker = AgentDecisionTracker(api_key) + self.analyzer = DecisionAnalyzer(api_key) -## Benefits for Agent Development + def make_decision(self, decision_type, input_data): + """Make a decision and log it""" + # Agent decision-making logic here + output_data = self.process_decision(input_data) + confidence_score = self.calculate_confidence(input_data, output_data) -### Rapid Development + # Log the decision + decision_record = self.decision_tracker.log_decision( + self.agent_id, + decision_type, + input_data, + output_data, + confidence_score + ) -- **Instant Schema Creation**: Create database schemas in minutes, not hours -- **Natural Language Interface**: Use plain English to describe data structures -- **Immediate APIs**: Get REST APIs as soon as schema is created -- **Quick Iteration**: Easily modify schemas as requirements change + # Flag for review if confidence is low + if confidence_score < 0.7: + self.decision_tracker.flag_for_review( + decision_record["id"], + "Low confidence score", + "high" + ) -### Flexible Architecture + return output_data -- **Multiple Schemas**: Create different schemas for different agent workflows -- **Easy Modification**: Modify existing schemas with simple prompts -- **Test Environments**: Create test schemas for validation -- **Production Deployment**: Deploy schemas when ready + def execute_action(self, action_type, parameters): + """Execute an action and log it""" + start_time = datetime.now() -### Integrated Development + try: + # Execute the action + result = self.perform_action(action_type, parameters) + status = "success" + except Exception as e: + result = str(e) + status = "error" -- **Model Generation**: Automatically generate Python models -- **API Documentation**: Get OpenAPI documentation automatically -- **MCP Integration**: Connect AI tools through MCP server -- **Text-to-SQL**: Query data using natural 
language + end_time = datetime.now() + execution_time = (end_time - start_time).total_seconds() -## Best Practices + # Log the action + self.decision_tracker.log_action( + self.agent_id, + action_type, + parameters, + result, + status, + execution_time + ) -### Schema Design + return result -- **Clear Descriptions**: Use clear, descriptive natural language -- **Appropriate Relationships**: Define relationships between tables -- **Data Types**: Let GibsonAI choose appropriate data types -- **Indexing**: Consider performance when designing schemas + def process_decision(self, input_data): + """Process decision (placeholder for actual logic)""" + # Implement actual decision logic here + return {"decision": "approved", "reason": "meets criteria"} -### Development Workflow + def calculate_confidence(self, input_data, output_data): + """Calculate confidence score (placeholder for actual logic)""" + # Implement confidence calculation logic here + return 0.85 -- **Start Simple**: Begin with simple schemas and evolve -- **Test Early**: Test schemas with sample data -- **Iterate**: Modify schemas based on testing results -- **Document**: Document schema decisions and changes + def perform_action(self, action_type, parameters): + """Perform action (placeholder for actual logic)""" + # Implement actual action logic here + return f"Action {action_type} completed successfully" +``` -### Production Deployment +### Customer Service Agent Example -- **Validation**: Validate schemas thoroughly before production -- **Backup**: Ensure proper backup and recovery procedures -- **Monitoring**: Monitor schema performance in production -- **Maintenance**: Plan for ongoing schema maintenance +```python +# Create schema for customer service workflow +# gibson modify customer_tickets "Create customer_tickets table with id, customer_id, issue_type, description, priority, status, assigned_agent" +# gibson modify agent_responses "Create agent_responses table with id, ticket_id, agent_id, response_text, response_type, timestamp" +# gibson modify escalation_rules "Create escalation_rules table with id, rule_name, conditions, escalation_action, priority_threshold" +# gibson code models +# gibson merge -## Getting Started +class CustomerServiceAgent(WorkflowAgent): + def handle_customer_ticket(self, ticket_id, customer_message): + """Handle customer service ticket""" -1. **Identify Use Case**: Determine what type of agent workflow you need -2. **Describe Schema**: Use natural language to describe your data structure -3. **Create Schema**: Use Gibson CLI to create the database schema -4. **Test Schema**: Validate with sample data -5. **Deploy**: Deploy to production when ready + # Make decision about response + input_data = { + "ticket_id": ticket_id, + "customer_message": customer_message + } -## Gibson CLI Commands + decision = self.make_decision("customer_response", input_data) -```bash -# Create schema quickly -gibson modify table_name "natural language description" -gibson code models -gibson merge + # Execute response action + response_result = self.execute_action("send_response", { + "ticket_id": ticket_id, + "response": decision["response"] + }) -# Test schema changes -gibson code models -# (test with sample data) -gibson merge + # Update workflow state + self.decision_tracker.update_workflow_state( + ticket_id, + "awaiting_customer_response", + "Agent responded to customer" + ) -# Reset if needed -gibson forget last -``` + return response_result -Ready to create database schemas quickly for your AI agent workflows? 
[Get started with GibsonAI](/get-started/signing-up). + def escalate_ticket(self, ticket_id, escalation_reason): + """Escalate ticket to human agent""" + # Log escalation decision + escalation_decision = self.make_decision("escalation", { + "ticket_id": ticket_id, + "reason": escalation_reason + }) ---- -title: Tracking data changes for AI agent monitoring -subtitle: Use database operations to track and monitor data changes in AI agent workflows -enableTableOfContents: true -updatedOn: '2025-01-08T00:00:00.000Z' ---- + # Execute escalation action + escalation_result = self.execute_action("escalate_ticket", { + "ticket_id": ticket_id, + "escalation_reason": escalation_reason + }) -Use database operations to track and monitor data changes in AI agent workflows. Create database schemas to log changes, track agent actions, and analyze data patterns using natural language queries. + # Update workflow state + self.decision_tracker.update_workflow_state( + ticket_id, + "escalated", + f"Escalated due to: {escalation_reason}" + ) - + return escalation_result +``` -MCP Integration +## Use Cases -Database Management +### Decision Auditing -CLI Tools +Perfect for: - +- Tracking all agent decisions with full context +- Maintaining audit logs for compliance +- Analyzing decision patterns and quality +- Identifying areas for improvement -## Key Features +### Workflow Management -### Change Tracking Database +Enable: -- **Data Change Logs**: Track all data changes with timestamps and context -- **Agent Action Logging**: Log agent actions and their data impacts -- **Version History**: Maintain version history of data changes -- **Audit Trail**: Complete audit trail of all data modifications +- Tracking workflow states and transitions +- Identifying bottlenecks and inefficiencies +- Monitoring workflow performance +- Managing complex multi-step processes -### Natural Language Monitoring +### Performance Analysis -- **Query Change Data**: Use natural language to query change logs -- **Pattern Analysis**: Analyze data change patterns and trends -- **Impact Assessment**: Assess the impact of data changes on system behavior -- **Reporting**: Generate reports on data changes and agent activity +Support: -### Database Operations +- Analyzing agent performance metrics +- Identifying high-risk decisions +- Tracking confidence scores and success rates +- Optimizing agent behavior based on data -- **Schema Management**: Create schemas to track various types of changes -- **REST API Access**: Access change data through auto-generated APIs -- **Real-time Logging**: Log changes as they occur in real-time -- **Flexible Queries**: Query change data using natural language +### Quality Assurance -## Implementation Examples +Allow: -### Creating Change Tracking Schema +- Flagging decisions for review +- Tracking review processes +- Maintaining quality standards +- Continuous improvement based on feedback -```python -# Using Gibson CLI to create change tracking database -# Create tables for tracking data changes -# gibson modify data_changes "Create data_changes table with id, table_name, record_id, change_type, old_value, new_value, changed_by, timestamp" -# gibson modify agent_actions "Create agent_actions table with id, agent_id, action_type, target_table, target_id, action_data, timestamp" -# gibson modify system_events "Create system_events table with id, event_type, event_data, source, severity, timestamp" -# gibson modify change_summaries "Create change_summaries table with id, date, table_name, change_count, agent_id, 
summary" +## Benefits for AI Agent Workflows -# Generate models and deploy -# gibson code models -# gibson merge -``` +### Comprehensive Tracking -### Change Tracking System +- **Full Audit Trail**: Complete record of all agent decisions and actions +- **Structured Data**: Organized data for easy analysis and reporting +- **Natural Language Queries**: Query decision data using natural language +- **Real-time Updates**: Track workflow states in real-time -```python -import requests -from datetime import datetime -import json +### Performance Insights -class ChangeTracker: - def __init__(self, api_key): - self.api_key = api_key - self.base_url = "https://api.gibsonai.com/v1/-" - self.headers = {"Authorization": f"Bearer {api_key}"} +- **Decision Analysis**: Analyze decision patterns and quality +- **Confidence Tracking**: Monitor confidence scores and success rates +- **Bottleneck Identification**: Identify workflow bottlenecks and inefficiencies +- **Performance Metrics**: Track agent performance over time - def log_data_change(self, table_name, record_id, change_type, old_value, new_value, changed_by): - """Log a data change""" - change_data = { - "table_name": table_name, - "record_id": record_id, - "change_type": change_type, - "old_value": json.dumps(old_value) if old_value else None, - "new_value": json.dumps(new_value) if new_value else None, - "changed_by": changed_by, - "timestamp": datetime.now().isoformat() - } +### Scalable Architecture - response = requests.post( - f"{self.base_url}/data-changes", - json=change_data, - headers=self.headers - ) +- **Database Management**: Easily modify schema as workflow needs evolve +- **API Access**: REST APIs for integration with existing systems +- **Natural Language Interface**: Use natural language for complex queries +- **Flexible Data Model**: Adapt to different workflow requirements - if response.status_code == 201: - print(f"Data change logged: {change_type} on {table_name}") - return response.json() - else: - print(f"Failed to log data change: {response.status_code}") - return None +## Best Practices - def log_agent_action(self, agent_id, action_type, target_table, target_id, action_data): - """Log an agent action""" - action_data_record = { - "agent_id": agent_id, - "action_type": action_type, - "target_table": target_table, - "target_id": target_id, - "action_data": json.dumps(action_data), - "timestamp": datetime.now().isoformat() - } +### Data Management - response = requests.post( - f"{self.base_url}/agent-actions", - json=action_data_record, - headers=self.headers - ) +- **Consistent Logging**: Ensure all decisions and actions are logged consistently +- **Data Quality**: Maintain high-quality data for accurate analysis +- **Retention Policies**: Implement appropriate data retention policies +- **Security**: Secure sensitive workflow data appropriately - if response.status_code == 201: - print(f"Agent action logged: {action_type} by {agent_id}") - return response.json() - else: - print(f"Failed to log agent action: {response.status_code}") - return None +### Workflow Design - def log_system_event(self, event_type, event_data, source, severity="info"): - """Log a system event""" - event_record = { - "event_type": event_type, - "event_data": json.dumps(event_data), - "source": source, - "severity": severity, - "timestamp": datetime.now().isoformat() - } +- **Clear States**: Define clear workflow states and transitions +- **Decision Criteria**: Establish clear criteria for decision-making +- **Escalation Rules**: Define when and how 
to escalate decisions +- **Performance Metrics**: Track meaningful performance metrics - response = requests.post( - f"{self.base_url}/system-events", - json=event_record, - headers=self.headers - ) +### Analysis and Improvement - if response.status_code == 201: - print(f"System event logged: {event_type}") - return response.json() - else: - print(f"Failed to log system event: {response.status_code}") - return None +- **Regular Analysis**: Regularly analyze decision data for insights +- **Feedback Loops**: Implement feedback loops for continuous improvement +- **Pattern Recognition**: Identify patterns in decision-making +- **Optimization**: Continuously optimize workflows based on data - def create_change_summary(self, date, table_name, change_count, agent_id, summary): - """Create a change summary""" - summary_data = { - "date": date, - "table_name": table_name, - "change_count": change_count, - "agent_id": agent_id, - "summary": summary - } +## Getting Started - response = requests.post( - f"{self.base_url}/change-summaries", - json=summary_data, - headers=self.headers - ) +1. **Design Workflow Schema**: Define your workflow and decision tracking schema +2. **Create Database**: Use Gibson CLI to create your database schema +3. **Integrate Tracking**: Add decision tracking to your agent workflows +4. **Analyze Data**: Use natural language queries to analyze decision data +5. **Optimize**: Continuously improve workflows based on insights - if response.status_code == 201: - print(f"Change summary created for {date}") - return response.json() - else: - print(f"Failed to create change summary: {response.status_code}") - return None +## Gibson CLI Commands + +```bash +# Create workflow tracking schema +gibson modify table_name "description of workflow table" +gibson code models +gibson merge + +# Generate models for workflow integration +gibson code models +gibson code schemas ``` -### Data Change Monitoring +Ready to implement database-driven workflow tracking for your AI agents? [Get started with GibsonAI](/get-started/signing-up). -```python -class DataChangeMonitor: - def __init__(self, api_key): - self.api_key = api_key - self.base_url = "https://api.gibsonai.com/v1/-" - self.headers = {"Authorization": f"Bearer {api_key}"} - def monitor_recent_changes(self, hours=24): - """Monitor recent data changes""" - query_request = { - "query": f"Show all data changes from the last {hours} hours grouped by table and change type" - } +--- +title: REST APIs for AI Agent Data Access +subtitle: Auto-generated REST APIs that agents can consume for database operations +enableTableOfContents: true +updatedOn: '2025-01-08T00:00:00.000Z' +--- - response = requests.post( - f"{self.base_url}/query", - json=query_request, - headers=self.headers - ) +Auto-generated REST APIs that agents can consume for database operations. GibsonAI automatically creates REST endpoints based on your database schema, providing immediate API access for AI agents. 
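
To make this concrete, here is a minimal sketch of an agent reading from one of these auto-generated endpoints. It assumes a `customers` table already exists in your project schema and uses a placeholder API key; the URL pattern follows the auto-generated routes documented below.

```python
import requests

# Placeholder project credentials; substitute your own API key
BASE_URL = "https://api.gibsonai.com/v1/-"
headers = {"Authorization": "Bearer your_api_key"}

# The /customers endpoint exists as soon as the schema is merged,
# so no hand-written backend code is needed to read records
response = requests.get(f"{BASE_URL}/customers", headers=headers)
if response.status_code == 200:
    for customer in response.json():
        print(customer)
else:
    print(f"Request failed: {response.status_code}")
```
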
- if response.status_code == 200: - results = response.json() - print(f"Recent changes in last {hours} hours:") - for result in results: - print(f" {result}") - return results - else: - print(f"Query failed: {response.status_code}") - return None + - def analyze_change_patterns(self): - """Analyze data change patterns""" - query_request = { - "query": "Analyze data change patterns by agent, table, and time of day for the last 7 days" - } - - response = requests.post( - f"{self.base_url}/query", - json=query_request, - headers=self.headers - ) - - if response.status_code == 200: - results = response.json() - print("Change pattern analysis:") - for result in results: - print(f" {result}") - return results - else: - print(f"Query failed: {response.status_code}") - return None +MCP Integration - def detect_unusual_activity(self): - """Detect unusual data change activity""" - query_request = { - "query": "Find unusual data change activity including high-frequency changes, bulk operations, and changes outside normal hours" - } +Auto-Generated APIs - response = requests.post( - f"{self.base_url}/query", - json=query_request, - headers=self.headers - ) +CLI Tools - if response.status_code == 200: - results = response.json() - print("Unusual activity detected:") - for result in results: - print(f" {result}") - return results - else: - print(f"Query failed: {response.status_code}") - return None + - def get_agent_activity_summary(self, agent_id): - """Get activity summary for specific agent""" - query_request = { - "query": f"Show activity summary for agent {agent_id} including actions performed, data changes, and time patterns" - } +## Key Features - response = requests.post( - f"{self.base_url}/query", - json=query_request, - headers=self.headers - ) +### Auto-Generated REST APIs - if response.status_code == 200: - results = response.json() - print(f"Activity summary for agent {agent_id}:") - for result in results: - print(f" {result}") - return results - else: - print(f"Query failed: {response.status_code}") - return None -``` +- **Schema-Based Generation**: APIs automatically generated from database schemas +- **CRUD Operations**: Full Create, Read, Update, Delete operations +- **Immediate Availability**: APIs available as soon as schema is created +- **Automatic Updates**: APIs update when schema changes -### Agent Integration with Change Tracking +### Agent-Optimized Endpoints -```python -class MonitoredAgent: - def __init__(self, agent_id, api_key): - self.agent_id = agent_id - self.api_key = api_key - self.base_url = "https://api.gibsonai.com/v1/-" - self.headers = {"Authorization": f"Bearer {api_key}"} - self.change_tracker = ChangeTracker(api_key) +- **RESTful Design**: Standard REST API patterns +- **JSON Response Format**: Consistent JSON responses +- **Error Handling**: Comprehensive error responses +- **Data Validation**: Built-in request validation - def create_record(self, table_name, record_data): - """Create a record with change tracking""" - # Create the record - response = requests.post( - f"{self.base_url}/{table_name}", - json=record_data, - headers=self.headers - ) +### Text-to-SQL Integration - if response.status_code == 201: - created_record = response.json() +- **Natural Language Queries**: Convert natural language to SQL +- **Complex Queries**: Handle multi-table joins and aggregations +- **Safe Execution**: Protected query execution +- **Flexible Results**: Return results in agent-friendly formats - # Log the change - self.change_tracker.log_data_change( - 
table_name=table_name, - record_id=created_record["id"], - change_type="CREATE", - old_value=None, - new_value=record_data, - changed_by=self.agent_id - ) +## Implementation Examples - # Log the agent action - self.change_tracker.log_agent_action( - agent_id=self.agent_id, - action_type="CREATE_RECORD", - target_table=table_name, - target_id=created_record["id"], - action_data=record_data - ) +### Basic CRUD Operations - print(f"Record created and logged: {table_name}") - return created_record - else: - print(f"Failed to create record: {response.status_code}") - return None +```python +import requests - def update_record(self, table_name, record_id, update_data): - """Update a record with change tracking""" - # Get current record for old_value - current_response = requests.get( - f"{self.base_url}/{table_name}/{record_id}", - headers=self.headers - ) +# Base API URL +BASE_URL = "https://api.gibsonai.com/v1/-" +headers = {"Authorization": "Bearer your_api_key"} - old_value = current_response.json() if current_response.status_code == 200 else None +# GET: Retrieve all records +response = requests.get(f"{BASE_URL}/customers", headers=headers) +customers = response.json() - # Update the record - response = requests.put( - f"{self.base_url}/{table_name}/{record_id}", - json=update_data, - headers=self.headers - ) +# GET: Retrieve specific record +response = requests.get(f"{BASE_URL}/customers/123", headers=headers) +customer = response.json() - if response.status_code == 200: - updated_record = response.json() +# POST: Create new record +new_customer = { + "name": "John Doe", + "email": "john@example.com", + "phone": "+1-555-0123" +} +response = requests.post(f"{BASE_URL}/customers", json=new_customer, headers=headers) +created_customer = response.json() - # Log the change - self.change_tracker.log_data_change( - table_name=table_name, - record_id=record_id, - change_type="UPDATE", - old_value=old_value, - new_value=update_data, - changed_by=self.agent_id - ) +# PUT: Update existing record +updated_data = {"phone": "+1-555-0456"} +response = requests.put(f"{BASE_URL}/customers/123", json=updated_data, headers=headers) +updated_customer = response.json() - # Log the agent action - self.change_tracker.log_agent_action( - agent_id=self.agent_id, - action_type="UPDATE_RECORD", - target_table=table_name, - target_id=record_id, - action_data=update_data - ) +# DELETE: Remove record +response = requests.delete(f"{BASE_URL}/customers/123", headers=headers) +``` - print(f"Record updated and logged: {table_name}/{record_id}") - return updated_record - else: - print(f"Failed to update record: {response.status_code}") - return None +### Text-to-SQL Queries - def delete_record(self, table_name, record_id): - """Delete a record with change tracking""" - # Get current record for old_value - current_response = requests.get( - f"{self.base_url}/{table_name}/{record_id}", - headers=self.headers - ) +```python +# Natural language queries through API +query_request = { + "query": "Show me all customers who placed orders in the last 30 days" +} - old_value = current_response.json() if current_response.status_code == 200 else None +response = requests.post( + f"{BASE_URL}/query", + json=query_request, + headers=headers +) - # Delete the record - response = requests.delete( - f"{self.base_url}/{table_name}/{record_id}", - headers=self.headers - ) +results = response.json() +print(f"Found {len(results)} customers") - if response.status_code == 200: - # Log the change - self.change_tracker.log_data_change( - 
table_name=table_name, - record_id=record_id, - change_type="DELETE", - old_value=old_value, - new_value=None, - changed_by=self.agent_id - ) +# Complex analytical queries +query_request = { + "query": "What is the average order value by customer segment for this month?" +} - # Log the agent action - self.change_tracker.log_agent_action( - agent_id=self.agent_id, - action_type="DELETE_RECORD", - target_table=table_name, - target_id=record_id, - action_data={"deleted_record": old_value} - ) +response = requests.post( + f"{BASE_URL}/query", + json=query_request, + headers=headers +) - print(f"Record deleted and logged: {table_name}/{record_id}") - return True - else: - print(f"Failed to delete record: {response.status_code}") - return False +analytics = response.json() ``` -### Change Analysis and Reporting +### Agent Integration Example ```python -class ChangeAnalyzer: +class CustomerServiceAgent: def __init__(self, api_key): self.api_key = api_key self.base_url = "https://api.gibsonai.com/v1/-" self.headers = {"Authorization": f"Bearer {api_key}"} - def generate_daily_report(self, date): - """Generate daily change report""" - query_request = { - "query": f"Generate a comprehensive report of all data changes on {date} including counts by table, agent, and change type" - } - - response = requests.post( - f"{self.base_url}/query", - json=query_request, + def get_customer_info(self, customer_id): + """Get customer information""" + response = requests.get( + f"{self.base_url}/customers/{customer_id}", headers=self.headers ) + return response.json() - if response.status_code == 200: - results = response.json() - print(f"Daily report for {date}:") - for result in results: - print(f" {result}") - return results - else: - print(f"Query failed: {response.status_code}") - return None - - def analyze_agent_impact(self, agent_id, days=7): - """Analyze the impact of a specific agent""" - query_request = { - "query": f"Analyze the impact of agent {agent_id} over the last {days} days including records created, updated, deleted, and affected tables" - } - + def get_order_history(self, customer_id): + """Get customer order history using natural language""" + query = f"Show me all orders for customer {customer_id} ordered by date" response = requests.post( f"{self.base_url}/query", - json=query_request, + json={"query": query}, headers=self.headers ) + return response.json() - if response.status_code == 200: - results = response.json() - print(f"Agent {agent_id} impact analysis:") - for result in results: - print(f" {result}") - return results - else: - print(f"Query failed: {response.status_code}") - return None - - def find_data_anomalies(self): - """Find data anomalies and unusual patterns""" - query_request = { - "query": "Find data anomalies including unusual change volumes, unexpected change types, and irregular timing patterns" + def create_support_ticket(self, customer_id, issue): + """Create new support ticket""" + ticket_data = { + "customer_id": customer_id, + "issue": issue, + "status": "open", + "created_at": "2024-01-15T10:30:00Z" } - response = requests.post( - f"{self.base_url}/query", - json=query_request, + f"{self.base_url}/support-tickets", + json=ticket_data, headers=self.headers ) + return response.json() +``` - if response.status_code == 200: - results = response.json() - print("Data anomalies found:") - for result in results: - print(f" {result}") - return results - else: - print(f"Query failed: {response.status_code}") - return None +### Error Handling - def 
track_data_evolution(self, table_name, record_id): - """Track the evolution of a specific record""" - query_request = { - "query": f"Show the complete change history for record {record_id} in table {table_name} ordered by timestamp" - } +```python +def safe_api_call(endpoint, method="GET", data=None): + """Safe API call with error handling""" + try: + if method == "GET": + response = requests.get(f"{BASE_URL}/{endpoint}", headers=headers) + elif method == "POST": + response = requests.post(f"{BASE_URL}/{endpoint}", json=data, headers=headers) - response = requests.post( - f"{self.base_url}/query", - json=query_request, - headers=self.headers - ) + response.raise_for_status() + return response.json() - if response.status_code == 200: - results = response.json() - print(f"Change history for {table_name}/{record_id}:") - for result in results: - print(f" {result}") - return results - else: - print(f"Query failed: {response.status_code}") - return None + except requests.exceptions.HTTPError as e: + print(f"HTTP Error: {e.response.status_code}") + print(f"Response: {e.response.text}") + return None + except requests.exceptions.RequestException as e: + print(f"Request Error: {e}") + return None ``` -## Use Cases +## API Endpoints -### Agent Monitoring +### Standard Table Endpoints -Perfect for: +For each table in your schema, GibsonAI automatically generates: -- Tracking agent actions and their data impacts -- Monitoring agent performance and behavior -- Auditing agent operations for compliance -- Identifying agent-related issues or problems +``` +GET /v1/-/table-name # List all records +GET /v1/-/table-name/{id} # Get specific record +POST /v1/-/table-name # Create new record +PUT /v1/-/table-name/{id} # Update existing record +DELETE /v1/-/table-name/{id} # Delete record +``` -### Data Governance +### Special Endpoints -Enable: +``` +POST /v1/-/query # Text-to-SQL queries +GET /v1/-/schema # Get database schema +GET /v1/-/health # API health check +``` -- Maintaining complete audit trails of data changes -- Tracking data lineage and transformation -- Ensuring data quality and consistency -- Supporting compliance requirements +## Query Parameters -### System Analysis +### Filtering and Pagination -Support: +```python +# Filter records +params = { + "status": "active", + "created_after": "2024-01-01" +} +response = requests.get(f"{BASE_URL}/customers", params=params, headers=headers) -- Analyzing data change patterns and trends -- Identifying system performance issues -- Understanding data usage patterns -- Optimizing system performance +# Pagination +params = { + "page": 2, + "limit": 50 +} +response = requests.get(f"{BASE_URL}/orders", params=params, headers=headers) -### Problem Diagnosis +# Sorting +params = { + "sort": "created_at", + "order": "desc" +} +response = requests.get(f"{BASE_URL}/products", params=params, headers=headers) +``` -Allow: +## Agent Use Cases -- Investigating data-related issues -- Tracking down the source of problems -- Analyzing the impact of changes -- Supporting troubleshooting efforts +### Data Retrieval -## Benefits for AI Agent Systems +Perfect for agents that need to: -### Comprehensive Tracking +- Look up customer information +- Retrieve order history +- Access product catalogs +- Query analytics data -- **Complete Audit Trail**: Full record of all data changes and agent actions -- **Natural Language Queries**: Query change data using natural language -- **Pattern Analysis**: Analyze patterns in data changes and agent behavior -- **Impact Assessment**: 
Understand the impact of changes on system behavior +### Data Creation -### Monitoring and Analysis +Enable agents to: -- **Real-time Logging**: Log changes as they occur -- **Historical Analysis**: Analyze historical change patterns -- **Anomaly Detection**: Identify unusual or suspicious activity -- **Performance Monitoring**: Track system performance over time +- Create new customer records +- Generate support tickets +- Log user interactions +- Store processed data -### Flexible Architecture +### Data Updates -- **Database Storage**: Store change data in structured database format -- **REST API Access**: Access change data through auto-generated APIs -- **Flexible Schema**: Adapt to different monitoring needs -- **Integration Support**: Easy integration with existing systems +Allow agents to: -## Important Limitations +- Update customer preferences +- Modify order statuses +- Change product information +- Track interaction history -### What This Approach Does NOT Provide +### Complex Queries -- **Automated Alerts**: No automatic alert or notification system -- **Real-time Monitoring**: No real-time monitoring or alerting capabilities -- **Threshold Management**: No automatic threshold monitoring -- **Workflow Automation**: No automated response to changes +Support agents with: -### External Integration Required +- Multi-table joins +- Aggregation queries +- Time-based filtering +- Conditional logic -For complete monitoring solutions, you'll need: +## MCP Server Integration -- **Monitoring Tools**: Use external monitoring and alerting tools -- **Notification Systems**: Implement notification systems separately -- **Workflow Automation**: Use external workflow automation tools -- **Dashboard Tools**: Use external dashboard and visualization tools +Connect AI tools through MCP server: -## Best Practices +```python +# Example MCP server configuration for API access +mcp_config = { + "server_name": "gibsonai-api", + "base_url": "https://api.gibsonai.com/v1/-", + "authentication": { + "type": "bearer", + "token": "your_api_key" + }, + "capabilities": [ + "query_database", + "create_records", + "update_records", + "delete_records" + ] +} +``` -### Change Tracking Design +## Benefits for AI Agents -- **Comprehensive Logging**: Log all relevant changes and actions -- **Consistent Format**: Use consistent format for change records -- **Appropriate Detail**: Include appropriate level of detail in logs -- **Performance Consideration**: Consider performance impact of logging +- **Immediate Access**: APIs available instantly when schema is created +- **No Coding Required**: Auto-generated based on database schema +- **Natural Language**: Query database using natural language +- **Consistent Interface**: Standard REST API patterns +- **Error Handling**: Built-in error handling and validation +- **Scalable**: Handles high-volume agent requests +- **Secure**: Authentication and authorization built-in -### Data Management +## Getting Started -- **Retention Policies**: Implement appropriate data retention policies -- **Archive Strategy**: Archive old change data as needed -- **Data Quality**: Maintain high-quality change data -- **Privacy Considerations**: Consider privacy requirements for change data - -### Analysis and Reporting - -- **Regular Analysis**: Regularly analyze change data for insights -- **Trend Monitoring**: Monitor trends in data changes -- **Anomaly Detection**: Look for unusual patterns or anomalies -- **Actionable Insights**: Focus on actionable insights from change data - -## Getting 
Started - -1. **Design Change Schema**: Plan your change tracking database structure -2. **Create Database**: Use Gibson CLI to create change tracking schema -3. **Implement Tracking**: Add change tracking to your agent systems -4. **Analyze Data**: Use natural language queries to analyze change data -5. **Monitor and Improve**: Continuously monitor and improve your tracking - -## Gibson CLI Commands - -```bash -# Create change tracking schema -gibson modify table_name "description of change tracking table" -gibson code models -gibson merge +1. **Create Database Schema**: Use natural language to define your schema +2. **Generate Models**: Create Python models with Gibson CLI +3. **Deploy Schema**: Apply changes to get APIs +4. **Test Endpoints**: Use the auto-generated API endpoints +5. **Connect Agents**: Integrate agents with the APIs -# Generate models for change tracking -gibson code models -gibson code schemas -``` +## OpenAPI Documentation -## Sample Schema for Change Tracking +Each GibsonAI project provides: -```python -# Basic change tracking schema -change_tables = { - "data_changes": "Create data_changes table with id, table_name, record_id, change_type, old_value, new_value, changed_by, timestamp", - "agent_actions": "Create agent_actions table with id, agent_id, action_type, target_table, target_id, action_data, timestamp", - "system_events": "Create system_events table with id, event_type, event_data, source, severity, timestamp" -} -``` +- Complete OpenAPI specification +- Interactive API documentation +- Code examples in multiple languages +- Authentication details +- Error response formats -Ready to implement data change tracking for your AI agent systems? [Get started with GibsonAI](/get-started/signing-up). +Access your OpenAPI spec through the project settings in Gibson Studio. --- -title: Database queries for external dashboards and visualization tools -subtitle: Use GibsonAI's text-to-SQL capabilities to power external dashboards and visualization tools +title: Rapid database schema creation for AI agent workflows +subtitle: Create database schemas quickly for different AI agent actions using natural language enableTableOfContents: true updatedOn: '2025-01-08T00:00:00.000Z' --- -Use GibsonAI's text-to-SQL capabilities to power external dashboards and visualization tools. Create database schemas and query them with natural language to feed data into tools like Retool, Grafana, or custom dashboards. +Create database schemas quickly for different AI agent actions using natural language. Use GibsonAI's natural language database management to rapidly set up data structures for various agent workflows and use cases. + +## How it works + +GibsonAI enables rapid database schema creation using natural language descriptions. Instead of manually designing database schemas, you can describe what you need in plain English and have GibsonAI generate the appropriate database structure for your agent workflows. 
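
For example, setting up a brand-new table for an agent workflow follows the same three-step loop used throughout this page. This is a sketch only: the `task_queue` table and its columns are illustrative placeholders, not part of any existing schema.

```python
# 1. Describe the table in plain English; GibsonAI infers columns and types:
#    gibson modify task_queue "Create task_queue table with id, agent_id, task_type, payload, status, created_at"

# 2. Generate Python models from the updated schema:
#    gibson code models

# 3. Deploy the change; REST endpoints for the new table become available immediately:
#    gibson merge
```
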
-MCP Integration +MCP Integration -Database & Queries +Database Management -CLI Tools +CLI Tools ## Key Features -### Natural Language to SQL +### Natural Language Schema Creation -- **Text-to-SQL Queries**: Convert natural language questions to SQL queries -- **Complex Queries**: Handle multi-table joins and aggregations -- **Safe Execution**: Protected query execution with built-in safeguards -- **Flexible Results**: Return results in formats suitable for dashboards +- **Instant Schema Generation**: Create database schemas from natural language descriptions +- **Table Definition**: Define tables with relationships using simple prompts +- **Data Type Selection**: Automatically choose appropriate data types +- **Index Creation**: Generate indexes for optimal performance -### Database Schema Management +### Rapid Development -- **Schema Creation**: Create database schemas using natural language -- **Table Management**: Add and modify tables with simple prompts -- **Relationship Building**: Define relationships between tables naturally -- **Data Type Handling**: Automatically select appropriate data types +- **Quick Prototyping**: Rapidly prototype database schemas for different use cases +- **Iterative Design**: Easily modify schemas as requirements evolve +- **Immediate APIs**: Get REST APIs instantly when schema is created +- **Model Generation**: Automatically generate Python models -### REST API Integration +### Flexible Schema Management -- **Auto-Generated APIs**: REST endpoints for all database tables -- **Query Endpoint**: Dedicated endpoint for natural language queries -- **JSON Responses**: Consistent JSON format for easy integration -- **Authentication**: Secure API access with authentication +- **Multiple Projects**: Create different database schemas for different agent workflows +- **Schema Evolution**: Easily modify existing schemas as needs change +- **Testing Environments**: Create test schemas for validation +- **Production Deployment**: Deploy schemas to production when ready + +## Use Cases + +### Agent Workflow Development + +Perfect for: + +- Creating database schemas for new agent workflows +- Prototyping data structures for different use cases +- Testing schema designs with sample data +- Validating data models before production + +### Rapid Prototyping + +Enable: + +- Quick database setup for proof-of-concept projects +- Testing different data models and relationships +- Validating agent data requirements +- Iterating on schema designs + +### Multi-Agent Systems + +Support: + +- Creating specialized databases for different agent types +- Isolating data for different agent workflows +- Managing complex multi-agent data relationships +- Coordinating data access across agent systems ## Implementation Examples -### Setting Up Database for Dashboard Data +### Creating Schema for Customer Service Agent ```python -# Using Gibson CLI to create database schema for dashboard data -# Create analytics tables -# gibson modify page_views "Create page_views table with id, page_url, user_id, timestamp, session_id" -# gibson modify user_sessions "Create user_sessions table with id, user_id, start_time, end_time, device_type" -# gibson modify conversion_events "Create conversion_events table with id, user_id, event_type, value, timestamp" -# gibson modify user_metrics "Create user_metrics table with user_id, metric_name, value, date" +# Using Gibson CLI to create customer service database schema +# Create customer service tables with natural language +# gibson modify customers "Create 
customers table with id, name, email, phone, account_status, created_at"
# gibson modify support_tickets "Create support_tickets table with id, customer_id, issue_type, description, priority, status, created_at, resolved_at"
# gibson modify agent_responses "Create agent_responses table with id, ticket_id, agent_id, response_text, timestamp, satisfaction_score"
# gibson modify escalations "Create escalations table with id, ticket_id, escalated_to, reason, escalated_at"

# Generate models and deploy
# gibson code models
# gibson merge
```

### Creating Schema for E-commerce Agent

```python
# Create e-commerce database schema quickly
# gibson modify products "Create products table with id, name, description, price, category, stock_quantity, active"
# gibson modify orders "Create orders table with id, customer_id, total_amount, status, payment_method, created_at"
# gibson modify order_items "Create order_items table with id, order_id, product_id, quantity, unit_price"
# gibson modify shopping_carts "Create shopping_carts table with id, customer_id, product_id, quantity, added_at"
# gibson modify reviews "Create reviews table with id, product_id, customer_id, rating, comment, created_at"

# Generate models and deploy
# gibson code models
# gibson merge
```

### Creating Schema for Analytics Agent

```python
# Create analytics database schema
# gibson modify user_events "Create user_events table with id, user_id, event_type, event_data, timestamp, session_id"
# gibson modify page_views "Create page_views table with id, user_id, page_url, referrer, timestamp, duration"
# gibson modify conversions "Create conversions table with id, user_id, conversion_type, value, timestamp"
# gibson modify user_segments "Create user_segments table with id, user_id, segment_name, segment_value, calculated_at"

# Generate models and deploy
# gibson code models
# gibson merge
```

### Quick Schema Creation Framework

```python
import subprocess
import json
from datetime import datetime  # used for the created_at timestamp below

class QuickSchemaCreator:
    def __init__(self):
        self.schemas = {}

    def create_schema_for_action(self, action_name, table_descriptions):
        """Create database schema for specific agent action"""
        print(f"Creating schema for action: {action_name}")

        # Store schema description
        
self.schemas[action_name] = { + "tables": table_descriptions, + "created_at": datetime.now().isoformat() } - response = requests.post( - f"{self.base_url}/query", - json=query_request, - headers=self.headers - ) + # Generate Gibson CLI commands + for table_name, description in table_descriptions.items(): + command = f'gibson modify {table_name} "{description}"' + print(f"Executing: {command}") - if response.status_code == 200: - return response.json() - else: - print(f"Query failed: {response.status_code}") - return None + # Note: In real implementation, you would execute the command + # result = subprocess.run(command, shell=True, capture_output=True, text=True) + # if result.returncode != 0: + # print(f"Error creating table {table_name}: {result.stderr}") + # return False - def get_user_engagement_metrics(self): - """Get user engagement metrics""" - query_request = { - "query": "Calculate average session duration and pages per session by device type" - } + # Generate models + print("Generating models...") + # subprocess.run("gibson code models", shell=True) - response = requests.post( - f"{self.base_url}/query", - json=query_request, - headers=self.headers - ) + # Deploy schema + print("Deploying schema...") + # subprocess.run("gibson merge", shell=True) - if response.status_code == 200: - return response.json() - else: - print(f"Query failed: {response.status_code}") - return None -``` + print(f"Schema for {action_name} created successfully!") + return True -### Integrating with External Dashboard Tools + def create_customer_service_schema(self): + """Create schema for customer service agent""" + table_descriptions = { + "customers": "Create customers table with id, name, email, phone, account_status, created_at", + "support_tickets": "Create support_tickets table with id, customer_id, issue_type, description, priority, status, created_at, resolved_at", + "agent_responses": "Create agent_responses table with id, ticket_id, agent_id, response_text, timestamp, satisfaction_score", + "escalations": "Create escalations table with id, ticket_id, escalated_to, reason, escalated_at" + } -```python -# Example integration with Retool -class RetoolIntegration: - def __init__(self, gibson_api_key): - self.dashboard_provider = DashboardDataProvider(gibson_api_key) + return self.create_schema_for_action("customer_service", table_descriptions) - def get_dashboard_data(self, widget_type): - """Get data for specific dashboard widget""" - if widget_type == "daily_active_users": - return self.dashboard_provider.get_daily_active_users() - elif widget_type == "page_views": - return self.dashboard_provider.get_page_view_stats() - elif widget_type == "conversion_funnel": - return self.dashboard_provider.get_conversion_funnel() - elif widget_type == "engagement_metrics": - return self.dashboard_provider.get_user_engagement_metrics() - else: - return None + def create_ecommerce_schema(self): + """Create schema for e-commerce agent""" + table_descriptions = { + "products": "Create products table with id, name, description, price, category, stock_quantity, active", + "orders": "Create orders table with id, customer_id, total_amount, status, payment_method, created_at", + "order_items": "Create order_items table with id, order_id, product_id, quantity, unit_price", + "shopping_carts": "Create shopping_carts table with id, customer_id, product_id, quantity, added_at", + "reviews": "Create reviews table with id, product_id, customer_id, rating, comment, created_at" + } -# Example integration with Grafana -class 
GrafanaIntegration: - def __init__(self, gibson_api_key): - self.dashboard_provider = DashboardDataProvider(gibson_api_key) + return self.create_schema_for_action("ecommerce", table_descriptions) - def get_time_series_data(self, metric_name, time_range): - """Get time series data for Grafana""" - query_request = { - "query": f"Get {metric_name} values over time for the last {time_range}" + def create_analytics_schema(self): + """Create schema for analytics agent""" + table_descriptions = { + "user_events": "Create user_events table with id, user_id, event_type, event_data, timestamp, session_id", + "page_views": "Create page_views table with id, user_id, page_url, referrer, timestamp, duration", + "conversions": "Create conversions table with id, user_id, conversion_type, value, timestamp", + "user_segments": "Create user_segments table with id, user_id, segment_name, segment_value, calculated_at" } - response = requests.post( - f"{self.dashboard_provider.base_url}/query", - json=query_request, - headers=self.dashboard_provider.headers - ) - - if response.status_code == 200: - # Format data for Grafana - data = response.json() - return self.format_for_grafana(data) - else: - return None + return self.create_schema_for_action("analytics", table_descriptions) - def format_for_grafana(self, data): - """Format data for Grafana consumption""" - # Convert to Grafana time series format - return { - "target": "metric_name", - "datapoints": [[value, timestamp] for value, timestamp in data] + def create_content_management_schema(self): + """Create schema for content management agent""" + table_descriptions = { + "articles": "Create articles table with id, title, content, author_id, category, status, created_at, updated_at", + "comments": "Create comments table with id, article_id, user_id, comment_text, created_at, approved", + "categories": "Create categories table with id, name, description, parent_id", + "tags": "Create tags table with id, name, description", + "article_tags": "Create article_tags table with id, article_id, tag_id" } + + return self.create_schema_for_action("content_management", table_descriptions) ``` -### Creating Custom Dashboard API +### Testing Schema with Sample Data ```python -from flask import Flask, jsonify, request import requests -app = Flask(__name__) +class SchemaValidator: + def __init__(self, api_key): + self.api_key = api_key + self.base_url = "https://api.gibsonai.com/v1/-" + self.headers = {"Authorization": f"Bearer {api_key}"} -class CustomDashboardAPI: - def __init__(self, gibson_api_key): - self.gibson_api_key = gibson_api_key + def validate_schema_with_sample_data(self, schema_name, sample_data): + """Validate schema by inserting sample data""" + print(f"Validating schema: {schema_name}") + + success_count = 0 + error_count = 0 + + for table_name, records in sample_data.items(): + print(f"Testing table: {table_name}") + + for record in records: + try: + response = requests.post( + f"{self.base_url}/{table_name}", + json=record, + headers=self.headers + ) + + if response.status_code == 201: + success_count += 1 + print(f" ✓ Record inserted successfully") + else: + error_count += 1 + print(f" ✗ Failed to insert record: {response.status_code}") + + except Exception as e: + error_count += 1 + print(f" ✗ Error inserting record: {e}") + + print(f"\nValidation complete: {success_count} successful, {error_count} errors") + return error_count == 0 + + def test_customer_service_schema(self): + """Test customer service schema with sample data""" + sample_data = { + 
"customers": [ + {"name": "John Doe", "email": "john@example.com", "phone": "555-0123", "account_status": "active"}, + {"name": "Jane Smith", "email": "jane@example.com", "phone": "555-0456", "account_status": "active"} + ], + "support_tickets": [ + {"customer_id": 1, "issue_type": "billing", "description": "Question about billing", "priority": "medium", "status": "open"}, + {"customer_id": 2, "issue_type": "technical", "description": "Login issues", "priority": "high", "status": "open"} + ] + } + + return self.validate_schema_with_sample_data("customer_service", sample_data) + + def test_ecommerce_schema(self): + """Test e-commerce schema with sample data""" + sample_data = { + "products": [ + {"name": "Laptop", "description": "High-performance laptop", "price": 999.99, "category": "Electronics", "stock_quantity": 50, "active": True}, + {"name": "Mouse", "description": "Wireless mouse", "price": 29.99, "category": "Electronics", "stock_quantity": 200, "active": True} + ], + "orders": [ + {"customer_id": 1, "total_amount": 1029.98, "status": "completed", "payment_method": "credit_card"}, + {"customer_id": 2, "total_amount": 29.99, "status": "processing", "payment_method": "paypal"} + ] + } + + return self.validate_schema_with_sample_data("ecommerce", sample_data) +``` + +### Querying Created Schemas + +```python +class SchemaQuerier: + def __init__(self, api_key): + self.api_key = api_key self.base_url = "https://api.gibsonai.com/v1/-" - self.headers = {"Authorization": f"Bearer {gibson_api_key}"} + self.headers = {"Authorization": f"Bearer {api_key}"} - def execute_query(self, query): - """Execute natural language query""" - query_request = {"query": query} + def query_schema_data(self, query_description): + """Query schema data using natural language""" + query_request = { + "query": query_description + } response = requests.post( f"{self.base_url}/query", @@ -5583,554 +5417,806 @@ class CustomDashboardAPI: ) if response.status_code == 200: - return response.json() + results = response.json() + print(f"Query results: {len(results)} records found") + return results else: + print(f"Query failed: {response.status_code}") return None -dashboard_api = CustomDashboardAPI("your_gibson_api_key") - -@app.route('/api/dashboard/users/daily') -def daily_users(): - """Get daily user metrics""" - data = dashboard_api.execute_query("Count unique users by date for the last 30 days") - return jsonify(data) - -@app.route('/api/dashboard/conversions') -def conversions(): - """Get conversion data""" - data = dashboard_api.execute_query("Show conversion events by type for the last 7 days") - return jsonify(data) + def get_customer_service_metrics(self): + """Get customer service metrics""" + return self.query_schema_data( + "Show ticket count by status and average response time for the last 30 days" + ) -@app.route('/api/dashboard/query') -def custom_query(): - """Execute custom query from dashboard""" - query = request.args.get('q') - if not query: - return jsonify({"error": "Query parameter 'q' is required"}), 400 + def get_ecommerce_analytics(self): + """Get e-commerce analytics""" + return self.query_schema_data( + "Show total sales, order count, and top-selling products for the last week" + ) - data = dashboard_api.execute_query(query) - if data: - return jsonify(data) - else: - return jsonify({"error": "Query failed"}), 500 + def get_user_engagement_data(self): + """Get user engagement data""" + return self.query_schema_data( + "Calculate user engagement metrics including page views, session duration, 
and conversion rates" + ) ``` -## Dashboard Integration Examples +## Common Schema Patterns -### Business Intelligence Dashboard +### User Management Schema ```python -# Create schema for business intelligence -# gibson modify sales_data "Create sales_data table with id, product_id, sales_amount, sales_date, region" -# gibson modify product_catalog "Create product_catalog table with id, name, category, price, cost" -# gibson modify customer_segments "Create customer_segments table with customer_id, segment, value_score" -# gibson code models -# gibson merge - -def get_business_intelligence_data(): - """Get data for business intelligence dashboard""" - - # Revenue by region - revenue_by_region = { - "query": "Calculate total revenue by region for the last quarter" +def create_user_management_schema(): + """Create user management schema""" + table_descriptions = { + "users": "Create users table with id, username, email, password_hash, first_name, last_name, created_at, last_login", + "user_roles": "Create user_roles table with id, user_id, role_name, granted_at", + "user_sessions": "Create user_sessions table with id, user_id, session_token, expires_at, created_at", + "user_preferences": "Create user_preferences table with id, user_id, preference_key, preference_value" } - # Top performing products - top_products = { - "query": "Show top 10 products by sales volume for the last month" - } + creator = QuickSchemaCreator() + return creator.create_schema_for_action("user_management", table_descriptions) +``` - # Customer segment analysis - customer_analysis = { - "query": "Analyze customer segments by average order value and frequency" - } +### Inventory Management Schema - return { - "revenue_by_region": revenue_by_region, - "top_products": top_products, - "customer_analysis": customer_analysis +```python +def create_inventory_schema(): + """Create inventory management schema""" + table_descriptions = { + "inventory_items": "Create inventory_items table with id, product_id, location, quantity, reserved_quantity, last_updated", + "stock_movements": "Create stock_movements table with id, product_id, movement_type, quantity, location, timestamp, reference", + "suppliers": "Create suppliers table with id, name, contact_email, contact_phone, address", + "purchase_orders": "Create purchase_orders table with id, supplier_id, order_date, status, total_amount" } + + creator = QuickSchemaCreator() + return creator.create_schema_for_action("inventory_management", table_descriptions) ``` -### Operational Dashboard +### Communication Schema ```python -# Create schema for operational metrics -# gibson modify system_metrics "Create system_metrics table with id, metric_name, value, timestamp, server_id" -# gibson modify error_logs "Create error_logs table with id, error_type, message, timestamp, severity" -# gibson modify user_activity "Create user_activity table with id, user_id, action, timestamp, success" -# gibson code models -# gibson merge - -def get_operational_dashboard_data(): - """Get data for operational dashboard""" - - # System performance metrics - system_performance = { - "query": "Show average response time and error rate for the last hour" - } - - # Error analysis - error_analysis = { - "query": "Count errors by type and severity for the last 24 hours" - } - - # User activity patterns - user_activity = { - "query": "Analyze user activity patterns and success rates" +def create_communication_schema(): + """Create communication schema""" + table_descriptions = { + "messages": "Create messages 
table with id, sender_id, recipient_id, subject, content, sent_at, read_at", + "notifications": "Create notifications table with id, user_id, notification_type, title, message, read, created_at", + "communication_logs": "Create communication_logs table with id, user_id, channel, message, timestamp, status" } - return { - "system_performance": system_performance, - "error_analysis": error_analysis, - "user_activity": user_activity - } + creator = QuickSchemaCreator() + return creator.create_schema_for_action("communication", table_descriptions) ``` -## Use Cases - -### Analytics Dashboards - -Perfect for: - -- Web analytics and user behavior tracking -- E-commerce performance monitoring -- Marketing campaign effectiveness -- User engagement and retention metrics - -### Business Intelligence - -Enable: - -- Sales performance tracking -- Revenue analysis by segments -- Product performance metrics -- Customer lifetime value analysis - -### Operational Monitoring - -Support: - -- System performance metrics -- Error tracking and analysis -- User activity monitoring -- Application health dashboards - -### Custom Visualizations - -Allow: - -- Custom chart creation based on specific queries -- Real-time data visualization -- Interactive dashboard elements -- Dynamic filtering and drill-down capabilities - -## Benefits for Dashboard Development +## Benefits for Agent Development -### Rapid Data Access +### Rapid Development -- **Natural Language**: Query data using natural language instead of complex SQL -- **Instant APIs**: Auto-generated REST APIs for immediate data access -- **Flexible Queries**: Handle complex analytical queries easily -- **Real-time Data**: Access current data for live dashboards +- **Instant Schema Creation**: Create database schemas in minutes, not hours +- **Natural Language Interface**: Use plain English to describe data structures +- **Immediate APIs**: Get REST APIs as soon as schema is created +- **Quick Iteration**: Easily modify schemas as requirements change -### Easy Integration +### Flexible Architecture -- **Standard APIs**: REST APIs work with any dashboard tool -- **JSON Format**: Consistent JSON responses for easy parsing -- **Authentication**: Secure API access with built-in authentication -- **Documentation**: Auto-generated API documentation +- **Multiple Schemas**: Create different schemas for different agent workflows +- **Easy Modification**: Modify existing schemas with simple prompts +- **Test Environments**: Create test schemas for validation +- **Production Deployment**: Deploy schemas when ready -### Scalable Architecture +### Integrated Development -- **Database Management**: Easily modify schema as dashboard needs evolve -- **Performance**: Optimized queries for dashboard performance -- **Security**: Secure data access with proper authentication -- **Reliability**: Robust database infrastructure +- **Model Generation**: Automatically generate Python models +- **API Documentation**: Get OpenAPI documentation automatically +- **MCP Integration**: Connect AI tools through MCP server +- **Text-to-SQL**: Query data using natural language ## Best Practices -### Query Optimization +### Schema Design -- **Specific Queries**: Use specific natural language queries for better performance -- **Time Ranges**: Include appropriate time ranges in queries -- **Indexing**: Ensure proper indexing for frequently queried data -- **Caching**: Implement caching for frequently accessed dashboard data +- **Clear Descriptions**: Use clear, descriptive natural language +- 
**Appropriate Relationships**: Define relationships between tables +- **Data Types**: Let GibsonAI choose appropriate data types +- **Indexing**: Consider performance when designing schemas -### Dashboard Design +### Development Workflow -- **Clear Metrics**: Choose clear and meaningful metrics for visualization -- **Appropriate Visualizations**: Select appropriate chart types for data -- **User Experience**: Design intuitive and responsive dashboards -- **Performance**: Optimize dashboard loading times +- **Start Simple**: Begin with simple schemas and evolve +- **Test Early**: Test schemas with sample data +- **Iterate**: Modify schemas based on testing results +- **Document**: Document schema decisions and changes -### Data Management +### Production Deployment -- **Data Quality**: Ensure high-quality data for accurate dashboards -- **Regular Updates**: Keep dashboard data current and relevant -- **Backup Strategy**: Implement proper data backup and recovery -- **Monitoring**: Monitor dashboard performance and usage +- **Validation**: Validate schemas thoroughly before production +- **Backup**: Ensure proper backup and recovery procedures +- **Monitoring**: Monitor schema performance in production +- **Maintenance**: Plan for ongoing schema maintenance ## Getting Started -1. **Create Database Schema**: Define your data structure using natural language -2. **Generate Models**: Create Python models with Gibson CLI -3. **Populate Data**: Add sample or real data to your database -4. **Test Queries**: Validate your natural language queries -5. **Integrate with Dashboard Tool**: Connect your preferred dashboard tool to GibsonAI APIs +1. **Identify Use Case**: Determine what type of agent workflow you need +2. **Describe Schema**: Use natural language to describe your data structure +3. **Create Schema**: Use Gibson CLI to create the database schema +4. **Test Schema**: Validate with sample data +5. **Deploy**: Deploy to production when ready ## Gibson CLI Commands ```bash -# Create database schema for dashboard data -gibson modify table_name "description of table structure" +# Create schema quickly +gibson modify table_name "natural language description" gibson code models gibson merge -# Generate models for dashboard integration +# Test schema changes gibson code models -gibson code schemas -``` - -## Supported Dashboard Tools +# (test with sample data) +gibson merge -- **Retool**: Low-code dashboard builder -- **Grafana**: Time series visualization -- **Tableau**: Business intelligence platform -- **Power BI**: Microsoft's business analytics tool -- **Custom Dashboards**: Build your own using web frameworks +# Reset if needed +gibson forget last +``` -Ready to create database-powered dashboards? [Get started with GibsonAI](/get-started/signing-up). +Ready to create database schemas quickly for your AI agent workflows? [Get started with GibsonAI](/get-started/signing-up). --- -title: Connect AI tools to GibsonAI through MCP server -subtitle: Use the GibsonAI MCP server to connect AI tools and agents to your databases +title: Tracking data changes for AI agent monitoring +subtitle: Use database operations to track and monitor data changes in AI agent workflows enableTableOfContents: true updatedOn: '2025-01-08T00:00:00.000Z' --- -Use the GibsonAI MCP server to connect AI tools and agents to your databases. The Model Context Protocol (MCP) server provides a standardized way for AI tools to interact with your GibsonAI databases using natural language. 
- -## How it works - -The GibsonAI MCP server allows AI tools like GitHub Copilot, Cursor, Claude, and other AI assistants to interact with your databases through natural language. It provides secure, contextual access to your data and schema management capabilities. +Use database operations to track and monitor data changes in AI agent workflows. Create database schemas to log changes, track agent actions, and analyze data patterns using natural language queries. -MCP Server - -MCP Client Connection +MCP Integration Database Management +CLI Tools + ## Key Features -### Natural Language Database Operations +### Change Tracking Database -- **Schema Management**: Create and modify database schemas using natural language -- **Data Querying**: Execute text-to-SQL queries through AI tools -- **Table Operations**: Add, modify, and remove tables with simple prompts -- **Relationship Building**: Define relationships between tables naturally +- **Data Change Logs**: Track all data changes with timestamps and context +- **Agent Action Logging**: Log agent actions and their data impacts +- **Version History**: Maintain version history of data changes +- **Audit Trail**: Complete audit trail of all data modifications -### AI Tool Integration +### Natural Language Monitoring -- **GitHub Copilot**: Use with VS Code and GitHub Copilot -- **Cursor**: Integrate with Cursor AI editor -- **Claude**: Connect with Claude AI assistant -- **Custom Tools**: Connect any MCP-compatible AI tool +- **Query Change Data**: Use natural language to query change logs +- **Pattern Analysis**: Analyze data change patterns and trends +- **Impact Assessment**: Assess the impact of data changes on system behavior +- **Reporting**: Generate reports on data changes and agent activity -### Secure Database Access +### Database Operations -- **Authentication**: Secure API key-based authentication -- **Project Context**: AI tools understand your specific database schema -- **Safe Operations**: Built-in protections for data integrity -- **Scoped Access**: Access limited to your specific GibsonAI project +- **Schema Management**: Create schemas to track various types of changes +- **REST API Access**: Access change data through auto-generated APIs +- **Real-time Logging**: Log changes as they occur in real-time +- **Flexible Queries**: Query change data using natural language ## Implementation Examples -### Setting up MCP Server - -```json -{ - "mcpServers": { - "gibsonai": { - "command": "npx", - "args": ["@gibsonai/mcp-server"], - "env": { - "GIBSON_API_KEY": "your_api_key_here", - "GIBSON_PROJECT_ID": "your_project_id" - } - } - } -} -``` - -### Using with GitHub Copilot +### Creating Change Tracking Schema ```python -# Example: Creating a database schema through GitHub Copilot -# Prompt: "Create a user management system with users, roles, and permissions" - -# The MCP server will help GitHub Copilot understand your request and: -# 1. Create the database schema -# 2. Generate the appropriate tables -# 3. Set up relationships between tables -# 4. 
Generate Python models +# Using Gibson CLI to create change tracking database +# Create tables for tracking data changes +# gibson modify data_changes "Create data_changes table with id, table_name, record_id, change_type, old_value, new_value, changed_by, timestamp" +# gibson modify agent_actions "Create agent_actions table with id, agent_id, action_type, target_table, target_id, action_data, timestamp" +# gibson modify system_events "Create system_events table with id, event_type, event_data, source, severity, timestamp" +# gibson modify change_summaries "Create change_summaries table with id, date, table_name, change_count, agent_id, summary" -# gibson modify users "Create a users table with id, username, email, password_hash, created_at" -# gibson modify roles "Create a roles table with id, name, description, permissions" -# gibson modify user_roles "Create a user_roles table to link users and roles" +# Generate models and deploy # gibson code models # gibson merge ``` -### Natural Language Database Queries +### Change Tracking System ```python -# Example queries through MCP server -# AI tool prompts that the MCP server can handle: +import requests +from datetime import datetime +import json -# "Show me all users who registered in the last 30 days" -# "Create a new user with email john@example.com" -# "Update the user table to add a last_login column" -# "Find all orders with status 'pending' and total > 100" +class ChangeTracker: + def __init__(self, api_key): + self.api_key = api_key + self.base_url = "https://api.gibsonai.com/v1/-" + self.headers = {"Authorization": f"Bearer {api_key}"} -# The MCP server translates these to appropriate Gibson commands or API calls -``` + def log_data_change(self, table_name, record_id, change_type, old_value, new_value, changed_by): + """Log a data change""" + change_data = { + "table_name": table_name, + "record_id": record_id, + "change_type": change_type, + "old_value": json.dumps(old_value) if old_value else None, + "new_value": json.dumps(new_value) if new_value else None, + "changed_by": changed_by, + "timestamp": datetime.now().isoformat() + } -### Schema Evolution Example + response = requests.post( + f"{self.base_url}/data-changes", + json=change_data, + headers=self.headers + ) -```python -# Using AI tools to evolve your database schema + if response.status_code == 201: + print(f"Data change logged: {change_type} on {table_name}") + return response.json() + else: + print(f"Failed to log data change: {response.status_code}") + return None -# Prompt: "Add a subscription feature to the user system" -# MCP server helps generate: + def log_agent_action(self, agent_id, action_type, target_table, target_id, action_data): + """Log an agent action""" + action_data_record = { + "agent_id": agent_id, + "action_type": action_type, + "target_table": target_table, + "target_id": target_id, + "action_data": json.dumps(action_data), + "timestamp": datetime.now().isoformat() + } -# gibson modify subscriptions "Create a subscriptions table with id, user_id, plan_type, status, start_date, end_date" -# gibson modify users "Add subscription_id column to users table" -# gibson code models -# gibson merge -``` + response = requests.post( + f"{self.base_url}/agent-actions", + json=action_data_record, + headers=self.headers + ) -## AI Tool Capabilities + if response.status_code == 201: + print(f"Agent action logged: {action_type} by {agent_id}") + return response.json() + else: + print(f"Failed to log agent action: {response.status_code}") + return None -### 
Schema Creation + def log_system_event(self, event_type, event_data, source, severity="info"): + """Log a system event""" + event_record = { + "event_type": event_type, + "event_data": json.dumps(event_data), + "source": source, + "severity": severity, + "timestamp": datetime.now().isoformat() + } -AI tools can help you: + response = requests.post( + f"{self.base_url}/system-events", + json=event_record, + headers=self.headers + ) -- Design database schemas from natural language descriptions -- Create tables with appropriate data types -- Define relationships between tables -- Generate indexes and constraints + if response.status_code == 201: + print(f"System event logged: {event_type}") + return response.json() + else: + print(f"Failed to log system event: {response.status_code}") + return None -### Data Operations + def create_change_summary(self, date, table_name, change_count, agent_id, summary): + """Create a change summary""" + summary_data = { + "date": date, + "table_name": table_name, + "change_count": change_count, + "agent_id": agent_id, + "summary": summary + } -Enable AI tools to: + response = requests.post( + f"{self.base_url}/change-summaries", + json=summary_data, + headers=self.headers + ) -- Query data using natural language -- Insert new records -- Update existing data -- Generate reports and analytics + if response.status_code == 201: + print(f"Change summary created for {date}") + return response.json() + else: + print(f"Failed to create change summary: {response.status_code}") + return None +``` -### Model Generation +### Data Change Monitoring -Automatically generate: +```python +class DataChangeMonitor: + def __init__(self, api_key): + self.api_key = api_key + self.base_url = "https://api.gibsonai.com/v1/-" + self.headers = {"Authorization": f"Bearer {api_key}"} -- SQLAlchemy models -- Pydantic schemas -- Database migration scripts -- API documentation + def monitor_recent_changes(self, hours=24): + """Monitor recent data changes""" + query_request = { + "query": f"Show all data changes from the last {hours} hours grouped by table and change type" + } -## MCP Server Commands + response = requests.post( + f"{self.base_url}/query", + json=query_request, + headers=self.headers + ) -### Database Schema Operations + if response.status_code == 200: + results = response.json() + print(f"Recent changes in last {hours} hours:") + for result in results: + print(f" {result}") + return results + else: + print(f"Query failed: {response.status_code}") + return None -```bash -# Commands the MCP server can execute: + def analyze_change_patterns(self): + """Analyze data change patterns""" + query_request = { + "query": "Analyze data change patterns by agent, table, and time of day for the last 7 days" + } -# Create/modify tables -gibson modify table_name "description of changes" + response = requests.post( + f"{self.base_url}/query", + json=query_request, + headers=self.headers + ) -# Generate models -gibson code models -gibson code schemas + if response.status_code == 200: + results = response.json() + print("Change pattern analysis:") + for result in results: + print(f" {result}") + return results + else: + print(f"Query failed: {response.status_code}") + return None -# Apply changes -gibson merge + def detect_unusual_activity(self): + """Detect unusual data change activity""" + query_request = { + "query": "Find unusual data change activity including high-frequency changes, bulk operations, and changes outside normal hours" + } -# Build database -gibson build datastore -``` + 
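+        # The /query endpoint is GibsonAI's text-to-SQL interface: it accepts the
+        # natural-language prompt above, translates it into SQL, and executes it
+        # against the project database.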
response = requests.post( + f"{self.base_url}/query", + json=query_request, + headers=self.headers + ) -### Query Operations + if response.status_code == 200: + results = response.json() + print("Unusual activity detected:") + for result in results: + print(f" {result}") + return results + else: + print(f"Query failed: {response.status_code}") + return None -```python -# Text-to-SQL through MCP server -# AI tools can generate queries like: + def get_agent_activity_summary(self, agent_id): + """Get activity summary for specific agent""" + query_request = { + "query": f"Show activity summary for agent {agent_id} including actions performed, data changes, and time patterns" + } -import requests + response = requests.post( + f"{self.base_url}/query", + json=query_request, + headers=self.headers + ) -response = requests.post( - "https://api.gibsonai.com/v1/-/query", - json={"query": "Show me all active users with their last login"}, - headers={"Authorization": "Bearer your_api_key"} -) + if response.status_code == 200: + results = response.json() + print(f"Activity summary for agent {agent_id}:") + for result in results: + print(f" {result}") + return results + else: + print(f"Query failed: {response.status_code}") + return None ``` -## Integration Examples +### Agent Integration with Change Tracking -### VS Code with GitHub Copilot +```python +class MonitoredAgent: + def __init__(self, agent_id, api_key): + self.agent_id = agent_id + self.api_key = api_key + self.base_url = "https://api.gibsonai.com/v1/-" + self.headers = {"Authorization": f"Bearer {api_key}"} + self.change_tracker = ChangeTracker(api_key) -```json -// settings.json configuration for MCP server -{ - "github.copilot.enable": { - "*": true, - "plaintext": true, - "markdown": true - }, - "github.copilot.advanced": { - "listCount": 10, - "inlineSuggestCount": 3 - } -} -``` + def create_record(self, table_name, record_data): + """Create a record with change tracking""" + # Create the record + response = requests.post( + f"{self.base_url}/{table_name}", + json=record_data, + headers=self.headers + ) -### Cursor AI Integration + if response.status_code == 201: + created_record = response.json() -```python -# Example workflow in Cursor -# 1. Describe your database needs in natural language -# 2. Cursor uses MCP server to understand GibsonAI capabilities -# 3. Generates appropriate Gibson CLI commands -# 4. 
Creates Python models and API code + # Log the change + self.change_tracker.log_data_change( + table_name=table_name, + record_id=created_record["id"], + change_type="CREATE", + old_value=None, + new_value=record_data, + changed_by=self.agent_id + ) -# "I need a blog system with posts, authors, and comments" -# Cursor + MCP server generates: -# - Database schema -# - SQLAlchemy models -# - REST API endpoints -# - Sample queries + # Log the agent action + self.change_tracker.log_agent_action( + agent_id=self.agent_id, + action_type="CREATE_RECORD", + target_table=table_name, + target_id=created_record["id"], + action_data=record_data + ) + + print(f"Record created and logged: {table_name}") + return created_record + else: + print(f"Failed to create record: {response.status_code}") + return None + + def update_record(self, table_name, record_id, update_data): + """Update a record with change tracking""" + # Get current record for old_value + current_response = requests.get( + f"{self.base_url}/{table_name}/{record_id}", + headers=self.headers + ) + + old_value = current_response.json() if current_response.status_code == 200 else None + + # Update the record + response = requests.put( + f"{self.base_url}/{table_name}/{record_id}", + json=update_data, + headers=self.headers + ) + + if response.status_code == 200: + updated_record = response.json() + + # Log the change + self.change_tracker.log_data_change( + table_name=table_name, + record_id=record_id, + change_type="UPDATE", + old_value=old_value, + new_value=update_data, + changed_by=self.agent_id + ) + + # Log the agent action + self.change_tracker.log_agent_action( + agent_id=self.agent_id, + action_type="UPDATE_RECORD", + target_table=table_name, + target_id=record_id, + action_data=update_data + ) + + print(f"Record updated and logged: {table_name}/{record_id}") + return updated_record + else: + print(f"Failed to update record: {response.status_code}") + return None + + def delete_record(self, table_name, record_id): + """Delete a record with change tracking""" + # Get current record for old_value + current_response = requests.get( + f"{self.base_url}/{table_name}/{record_id}", + headers=self.headers + ) + + old_value = current_response.json() if current_response.status_code == 200 else None + + # Delete the record + response = requests.delete( + f"{self.base_url}/{table_name}/{record_id}", + headers=self.headers + ) + + if response.status_code == 200: + # Log the change + self.change_tracker.log_data_change( + table_name=table_name, + record_id=record_id, + change_type="DELETE", + old_value=old_value, + new_value=None, + changed_by=self.agent_id + ) + + # Log the agent action + self.change_tracker.log_agent_action( + agent_id=self.agent_id, + action_type="DELETE_RECORD", + target_table=table_name, + target_id=record_id, + action_data={"deleted_record": old_value} + ) + + print(f"Record deleted and logged: {table_name}/{record_id}") + return True + else: + print(f"Failed to delete record: {response.status_code}") + return False ``` -### Claude AI Integration +### Change Analysis and Reporting ```python -# Example conversation with Claude using MCP server -# User: "Help me create a customer management system" -# Claude: "I'll help you create a customer management system using GibsonAI" +class ChangeAnalyzer: + def __init__(self, api_key): + self.api_key = api_key + self.base_url = "https://api.gibsonai.com/v1/-" + self.headers = {"Authorization": f"Bearer {api_key}"} -# Claude generates Gibson commands: -# gibson modify customers "Create 
customers table with name, email, phone, address" -# gibson modify orders "Create orders table with customer_id, total, status, created_at" -# gibson modify order_items "Create order_items table with order_id, product_id, quantity, price" -# gibson code models -# gibson merge + def generate_daily_report(self, date): + """Generate daily change report""" + query_request = { + "query": f"Generate a comprehensive report of all data changes on {date} including counts by table, agent, and change type" + } + + response = requests.post( + f"{self.base_url}/query", + json=query_request, + headers=self.headers + ) + + if response.status_code == 200: + results = response.json() + print(f"Daily report for {date}:") + for result in results: + print(f" {result}") + return results + else: + print(f"Query failed: {response.status_code}") + return None + + def analyze_agent_impact(self, agent_id, days=7): + """Analyze the impact of a specific agent""" + query_request = { + "query": f"Analyze the impact of agent {agent_id} over the last {days} days including records created, updated, deleted, and affected tables" + } + + response = requests.post( + f"{self.base_url}/query", + json=query_request, + headers=self.headers + ) + + if response.status_code == 200: + results = response.json() + print(f"Agent {agent_id} impact analysis:") + for result in results: + print(f" {result}") + return results + else: + print(f"Query failed: {response.status_code}") + return None + + def find_data_anomalies(self): + """Find data anomalies and unusual patterns""" + query_request = { + "query": "Find data anomalies including unusual change volumes, unexpected change types, and irregular timing patterns" + } + + response = requests.post( + f"{self.base_url}/query", + json=query_request, + headers=self.headers + ) + + if response.status_code == 200: + results = response.json() + print("Data anomalies found:") + for result in results: + print(f" {result}") + return results + else: + print(f"Query failed: {response.status_code}") + return None + + def track_data_evolution(self, table_name, record_id): + """Track the evolution of a specific record""" + query_request = { + "query": f"Show the complete change history for record {record_id} in table {table_name} ordered by timestamp" + } + + response = requests.post( + f"{self.base_url}/query", + json=query_request, + headers=self.headers + ) + + if response.status_code == 200: + results = response.json() + print(f"Change history for {table_name}/{record_id}:") + for result in results: + print(f" {result}") + return results + else: + print(f"Query failed: {response.status_code}") + return None ``` ## Use Cases -### Rapid Prototyping +### Agent Monitoring Perfect for: -- Quickly creating database schemas for new projects -- Testing different data models -- Generating sample data and queries -- Validating database designs +- Tracking agent actions and their data impacts +- Monitoring agent performance and behavior +- Auditing agent operations for compliance +- Identifying agent-related issues or problems -### AI-Assisted Development +### Data Governance Enable: -- Natural language database operations -- Automated model generation -- Schema evolution guidance -- Query optimization suggestions +- Maintaining complete audit trails of data changes +- Tracking data lineage and transformation +- Ensuring data quality and consistency +- Supporting compliance requirements -### Team Collaboration +### System Analysis Support: -- Shared database understanding through AI tools -- Consistent schema 
management -- Automated documentation generation -- Knowledge transfer between team members +- Analyzing data change patterns and trends +- Identifying system performance issues +- Understanding data usage patterns +- Optimizing system performance -## Benefits +### Problem Diagnosis -### For Developers +Allow: -- **Faster Development**: Create databases using natural language -- **Reduced Errors**: AI-assisted schema design -- **Better Documentation**: Automatic generation of models and docs -- **Consistent Patterns**: Standardized database operations +- Investigating data-related issues +- Tracking down the source of problems +- Analyzing the impact of changes +- Supporting troubleshooting efforts -### For AI Tools +## Benefits for AI Agent Systems -- **Database Context**: Understanding of your specific schema -- **Safe Operations**: Protected database access -- **Natural Interface**: Human-like database interactions -- **Immediate Feedback**: Real-time schema and data access +### Comprehensive Tracking + +- **Complete Audit Trail**: Full record of all data changes and agent actions +- **Natural Language Queries**: Query change data using natural language +- **Pattern Analysis**: Analyze patterns in data changes and agent behavior +- **Impact Assessment**: Understand the impact of changes on system behavior + +### Monitoring and Analysis + +- **Real-time Logging**: Log changes as they occur +- **Historical Analysis**: Analyze historical change patterns +- **Anomaly Detection**: Identify unusual or suspicious activity +- **Performance Monitoring**: Track system performance over time + +### Flexible Architecture + +- **Database Storage**: Store change data in structured database format +- **REST API Access**: Access change data through auto-generated APIs +- **Flexible Schema**: Adapt to different monitoring needs +- **Integration Support**: Easy integration with existing systems + +## Important Limitations + +### What This Approach Does NOT Provide + +- **Automated Alerts**: No automatic alert or notification system +- **Real-time Monitoring**: No real-time monitoring or alerting capabilities +- **Threshold Management**: No automatic threshold monitoring +- **Workflow Automation**: No automated response to changes + +### External Integration Required + +For complete monitoring solutions, you'll need: + +- **Monitoring Tools**: Use external monitoring and alerting tools +- **Notification Systems**: Implement notification systems separately +- **Workflow Automation**: Use external workflow automation tools +- **Dashboard Tools**: Use external dashboard and visualization tools + +## Best Practices + +### Change Tracking Design + +- **Comprehensive Logging**: Log all relevant changes and actions +- **Consistent Format**: Use consistent format for change records +- **Appropriate Detail**: Include appropriate level of detail in logs +- **Performance Consideration**: Consider performance impact of logging + +### Data Management + +- **Retention Policies**: Implement appropriate data retention policies +- **Archive Strategy**: Archive old change data as needed +- **Data Quality**: Maintain high-quality change data +- **Privacy Considerations**: Consider privacy requirements for change data -### For Teams +### Analysis and Reporting -- **Shared Knowledge**: AI tools understand team's database -- **Consistent Approach**: Standardized database operations -- **Easy Onboarding**: New team members can use AI tools -- **Collaborative Design**: AI-assisted schema discussions +- **Regular Analysis**: 
Regularly analyze change data for insights +- **Trend Monitoring**: Monitor trends in data changes +- **Anomaly Detection**: Look for unusual patterns or anomalies +- **Actionable Insights**: Focus on actionable insights from change data ## Getting Started -1. **Set up GibsonAI Project**: Create your GibsonAI project and get API keys -2. **Configure MCP Server**: Install and configure the GibsonAI MCP server -3. **Connect AI Tools**: Configure your AI tools to use the MCP server -4. **Test Integration**: Try natural language database operations -5. **Build Your Schema**: Use AI tools to create and manage your database - -## Security Considerations - -- **API Key Management**: Secure storage of API keys -- **Project Isolation**: MCP server access scoped to specific project -- **Safe Operations**: Built-in protections for destructive operations -- **Audit Trail**: Track all operations performed through MCP server +1. **Design Change Schema**: Plan your change tracking database structure +2. **Create Database**: Use Gibson CLI to create change tracking schema +3. **Implement Tracking**: Add change tracking to your agent systems +4. **Analyze Data**: Use natural language queries to analyze change data +5. **Monitor and Improve**: Continuously monitor and improve your tracking -## Troubleshooting +## Gibson CLI Commands -### Common Issues +```bash +# Create change tracking schema +gibson modify table_name "description of change tracking table" +gibson code models +gibson merge -- **Connection Problems**: Check API key and project ID configuration -- **Permission Errors**: Verify API key has necessary permissions -- **Schema Conflicts**: Ensure schema changes don't conflict with existing data -- **Tool Compatibility**: Verify AI tool supports MCP protocol +# Generate models for change tracking +gibson code models +gibson code schemas +``` -### Best Practices +## Sample Schema for Change Tracking -- **Start Small**: Begin with simple schema operations -- **Test Changes**: Validate schema changes before applying -- **Use Dev Mode**: Enable Gibson dev mode for automatic code generation -- **Monitor Usage**: Track MCP server usage and performance +```python +# Basic change tracking schema +change_tables = { + "data_changes": "Create data_changes table with id, table_name, record_id, change_type, old_value, new_value, changed_by, timestamp", + "agent_actions": "Create agent_actions table with id, agent_id, action_type, target_table, target_id, action_data, timestamp", + "system_events": "Create system_events table with id, event_type, event_data, source, severity, timestamp" +} +``` -Ready to connect your AI tools to GibsonAI? [Get started with the MCP server setup](/ai/mcp-server). +Ready to implement data change tracking for your AI agent systems? [Get started with GibsonAI](/get-started/signing-up). --- -title: Database environments for multi-agent applications -subtitle: Create isolated database environments for different AI agents or agent workflows +title: Database queries for external dashboards and visualization tools +subtitle: Use GibsonAI's text-to-SQL capabilities to power external dashboards and visualization tools enableTableOfContents: true updatedOn: '2025-01-08T00:00:00.000Z' --- -Create isolated database environments for different AI agents or agent workflows. Use GibsonAI's natural language database management to set up separate schemas and data access for different agent applications. +Use GibsonAI's text-to-SQL capabilities to power external dashboards and visualization tools. 
Create database schemas and query them with natural language to feed data into tools like Retool, Grafana, or custom dashboards. -MCP Integration +MCP Integration -Database Management +Database & Queries CLI Tools @@ -6138,757 +6224,753 @@ Create isolated database environments for different AI agents or agent workflows ## Key Features -### Project-Based Isolation +### Natural Language to SQL -- **Separate Projects**: Create different GibsonAI projects for different agents -- **Independent Schemas**: Each project has its own database schema -- **Isolated Data**: Complete data separation between agent applications -- **Individual APIs**: Each project gets its own REST API endpoints +- **Text-to-SQL Queries**: Convert natural language questions to SQL queries +- **Complex Queries**: Handle multi-table joins and aggregations +- **Safe Execution**: Protected query execution with built-in safeguards +- **Flexible Results**: Return results in formats suitable for dashboards -### Natural Language Management +### Database Schema Management -- **Schema Creation**: Define database schemas using natural language +- **Schema Creation**: Create database schemas using natural language - **Table Management**: Add and modify tables with simple prompts - **Relationship Building**: Define relationships between tables naturally -- **Data Type Selection**: Automatically choose appropriate data types +- **Data Type Handling**: Automatically select appropriate data types + +### REST API Integration + +- **Auto-Generated APIs**: REST endpoints for all database tables +- **Query Endpoint**: Dedicated endpoint for natural language queries +- **JSON Responses**: Consistent JSON format for easy integration +- **Authentication**: Secure API access with authentication ## Implementation Examples -### Creating Separate Agent Databases +### Setting Up Database for Dashboard Data ```python -# Using Gibson CLI to create database for Agent A -# gibson modify user_sessions "Create a user sessions table for chatbot conversations" -# gibson modify conversation_history "Create conversation history with user_id, message, response, timestamp" -# gibson code models -# gibson merge +# Using Gibson CLI to create database schema for dashboard data +# Create analytics tables +# gibson modify page_views "Create page_views table with id, page_url, user_id, timestamp, session_id" +# gibson modify user_sessions "Create user_sessions table with id, user_id, start_time, end_time, device_type" +# gibson modify conversion_events "Create conversion_events table with id, user_id, event_type, value, timestamp" +# gibson modify user_metrics "Create user_metrics table with user_id, metric_name, value, date" -# For Agent B (different project) -# gibson modify product_catalog "Create a product catalog table with name, description, price, category" -# gibson modify user_preferences "Create user preferences table with user_id, preferred_categories, budget_range" +# Generate models and apply changes # gibson code models # gibson merge ``` -### Text-to-SQL for Different Agents +### Querying Data for Dashboard Widgets ```python -# Agent A: Chatbot querying conversation data import requests -query_request = { - "query": "Show me all conversations from today where users asked about pricing" -} +class DashboardDataProvider: + def __init__(self, api_key): + self.api_key = api_key + self.base_url = "https://api.gibsonai.com/v1/-" + self.headers = {"Authorization": f"Bearer {api_key}"} -response = requests.post( - "https://api.gibsonai.com/v1/-/query", - 
json=query_request, - headers={"Authorization": "Bearer agent_a_api_key"} -) + def get_daily_active_users(self): + """Get daily active users for dashboard""" + query_request = { + "query": "Count unique users by date for the last 30 days" + } -# Agent B: E-commerce agent querying product data -query_request = { - "query": "Find all products under $50 in electronics category" -} + response = requests.post( + f"{self.base_url}/query", + json=query_request, + headers=self.headers + ) -response = requests.post( - "https://api.gibsonai.com/v1/-/query", - json=query_request, - headers={"Authorization": "Bearer agent_b_api_key"} -) + if response.status_code == 200: + return response.json() + else: + print(f"Query failed: {response.status_code}") + return None + + def get_page_view_stats(self): + """Get page view statistics""" + query_request = { + "query": "Show page views by URL for the last 7 days with total counts" + } + + response = requests.post( + f"{self.base_url}/query", + json=query_request, + headers=self.headers + ) + + if response.status_code == 200: + return response.json() + else: + print(f"Query failed: {response.status_code}") + return None + + def get_conversion_funnel(self): + """Get conversion funnel data""" + query_request = { + "query": "Calculate conversion funnel from page views to conversions by event type" + } + + response = requests.post( + f"{self.base_url}/query", + json=query_request, + headers=self.headers + ) + + if response.status_code == 200: + return response.json() + else: + print(f"Query failed: {response.status_code}") + return None + + def get_user_engagement_metrics(self): + """Get user engagement metrics""" + query_request = { + "query": "Calculate average session duration and pages per session by device type" + } + + response = requests.post( + f"{self.base_url}/query", + json=query_request, + headers=self.headers + ) + + if response.status_code == 200: + return response.json() + else: + print(f"Query failed: {response.status_code}") + return None ``` -### Using Different API Endpoints +### Integrating with External Dashboard Tools ```python -# Agent A accessing conversation data -response = requests.get( - "https://api.gibsonai.com/v1/-/conversation-history", - headers={"Authorization": "Bearer agent_a_api_key"} -) +# Example integration with Retool +class RetoolIntegration: + def __init__(self, gibson_api_key): + self.dashboard_provider = DashboardDataProvider(gibson_api_key) -# Agent B accessing product data -response = requests.get( - "https://api.gibsonai.com/v1/-/product-catalog", - headers={"Authorization": "Bearer agent_b_api_key"} -) + def get_dashboard_data(self, widget_type): + """Get data for specific dashboard widget""" + if widget_type == "daily_active_users": + return self.dashboard_provider.get_daily_active_users() + elif widget_type == "page_views": + return self.dashboard_provider.get_page_view_stats() + elif widget_type == "conversion_funnel": + return self.dashboard_provider.get_conversion_funnel() + elif widget_type == "engagement_metrics": + return self.dashboard_provider.get_user_engagement_metrics() + else: + return None -# Creating new records for different agents -# Agent A creates new conversation -new_conversation = { - "user_id": "user123", - "message": "What are your pricing plans?", - "response": "We offer three plans: Basic, Pro, and Enterprise...", - "timestamp": "2024-01-15T10:30:00Z" -} +# Example integration with Grafana +class GrafanaIntegration: + def __init__(self, gibson_api_key): + self.dashboard_provider = 
DashboardDataProvider(gibson_api_key) + + def get_time_series_data(self, metric_name, time_range): + """Get time series data for Grafana""" + query_request = { + "query": f"Get {metric_name} values over time for the last {time_range}" + } + + response = requests.post( + f"{self.dashboard_provider.base_url}/query", + json=query_request, + headers=self.dashboard_provider.headers + ) -response = requests.post( - "https://api.gibsonai.com/v1/-/conversation-history", - json=new_conversation, - headers={"Authorization": "Bearer agent_a_api_key"} -) + if response.status_code == 200: + # Format data for Grafana + data = response.json() + return self.format_for_grafana(data) + else: + return None + + def format_for_grafana(self, data): + """Format data for Grafana consumption""" + # Convert to Grafana time series format + return { + "target": "metric_name", + "datapoints": [[value, timestamp] for value, timestamp in data] + } ``` -## Use Cases +### Creating Custom Dashboard API -### Agent Specialization +```python +from flask import Flask, jsonify, request +import requests -Perfect for scenarios where different agents need different data: +app = Flask(__name__) -- **Customer Service Agent**: Stores conversation history and user issues -- **E-commerce Agent**: Manages product catalog and purchase history -- **Content Agent**: Handles articles, media, and content recommendations -- **Analytics Agent**: Tracks metrics, performance, and insights +class CustomDashboardAPI: + def __init__(self, gibson_api_key): + self.gibson_api_key = gibson_api_key + self.base_url = "https://api.gibsonai.com/v1/-" + self.headers = {"Authorization": f"Bearer {gibson_api_key}"} -### Development Environments + def execute_query(self, query): + """Execute natural language query""" + query_request = {"query": query} -Create separate databases for: + response = requests.post( + f"{self.base_url}/query", + json=query_request, + headers=self.headers + ) -- **Development**: Test new agent features -- **Staging**: Validate agent behavior before production -- **Production**: Live agent operations -- **Training**: Historical data for agent training + if response.status_code == 200: + return response.json() + else: + return None -### Agent Collaboration +dashboard_api = CustomDashboardAPI("your_gibson_api_key") -Enable agents to: +@app.route('/api/dashboard/users/daily') +def daily_users(): + """Get daily user metrics""" + data = dashboard_api.execute_query("Count unique users by date for the last 30 days") + return jsonify(data) -- Share specific data through controlled interfaces -- Maintain their own specialized datasets -- Query each other's data when needed -- Maintain data consistency across workflows +@app.route('/api/dashboard/conversions') +def conversions(): + """Get conversion data""" + data = dashboard_api.execute_query("Show conversion events by type for the last 7 days") + return jsonify(data) -## Schema Management +@app.route('/api/dashboard/query') +def custom_query(): + """Execute custom query from dashboard""" + query = request.args.get('q') + if not query: + return jsonify({"error": "Query parameter 'q' is required"}), 400 -### Independent Schema Evolution + data = dashboard_api.execute_query(query) + if data: + return jsonify(data) + else: + return jsonify({"error": "Query failed"}), 500 +``` + +## Dashboard Integration Examples + +### Business Intelligence Dashboard ```python -# Agent A evolving its schema -# gibson modify conversation_history "Add sentiment_score column to track user satisfaction" -# gibson 
modify user_sessions "Add session_duration and interaction_count fields" +# Create schema for business intelligence +# gibson modify sales_data "Create sales_data table with id, product_id, sales_amount, sales_date, region" +# gibson modify product_catalog "Create product_catalog table with id, name, category, price, cost" +# gibson modify customer_segments "Create customer_segments table with customer_id, segment, value_score" # gibson code models # gibson merge -# Agent B evolving its schema independently -# gibson modify product_catalog "Add inventory_count and supplier_info columns" -# gibson modify user_preferences "Add notification_settings and purchase_history_summary" -# gibson code models -# gibson merge -``` +def get_business_intelligence_data(): + """Get data for business intelligence dashboard""" -### Data Model Generation + # Revenue by region + revenue_by_region = { + "query": "Calculate total revenue by region for the last quarter" + } -Each agent gets its own Python models: + # Top performing products + top_products = { + "query": "Show top 10 products by sales volume for the last month" + } -- **SQLAlchemy Models**: For database operations -- **Pydantic Schemas**: For data validation -- **Independent Updates**: Models update independently per agent + # Customer segment analysis + customer_analysis = { + "query": "Analyze customer segments by average order value and frequency" + } -## MCP Server Integration + return { + "revenue_by_region": revenue_by_region, + "top_products": top_products, + "customer_analysis": customer_analysis + } +``` -Connect different AI tools to different databases: +### Operational Dashboard -- **Agent-Specific Access**: Each agent connects to its own database -- **Natural Language Operations**: Use natural language for database operations -- **Contextual Queries**: AI tools understand the specific agent context -- **Secure Separation**: Each agent operates within its own data boundaries +```python +# Create schema for operational metrics +# gibson modify system_metrics "Create system_metrics table with id, metric_name, value, timestamp, server_id" +# gibson modify error_logs "Create error_logs table with id, error_type, message, timestamp, severity" +# gibson modify user_activity "Create user_activity table with id, user_id, action, timestamp, success" +# gibson code models +# gibson merge -## Benefits for Multi-Agent Applications +def get_operational_dashboard_data(): + """Get data for operational dashboard""" -- **Data Isolation**: Complete separation between agent data -- **Independent Development**: Agents can evolve independently -- **Specialized Schemas**: Each agent has optimal data structure -- **Scalable Architecture**: Add new agents without affecting existing ones -- **Natural Language Control**: Manage all databases using natural language -- **Automatic APIs**: Each agent gets its own REST API endpoints + # System performance metrics + system_performance = { + "query": "Show average response time and error rate for the last hour" + } -## Getting Started + # Error analysis + error_analysis = { + "query": "Count errors by type and severity for the last 24 hours" + } -1. **Create Projects**: Set up separate GibsonAI projects for each agent -2. **Define Schemas**: Use natural language to create database schemas -3. **Generate Models**: Create Python models for each agent -4. **Connect Agents**: Connect each agent to its specific database -5. 
**Iterate**: Evolve each agent's database as needs change + # User activity patterns + user_activity = { + "query": "Analyze user activity patterns and success rates" + } + return { + "system_performance": system_performance, + "error_analysis": error_analysis, + "user_activity": user_activity + } +``` ---- -title: Database environments for testing AI agent behavior -subtitle: Use isolated database environments to test and track AI agent behavior and performance -enableTableOfContents: true -updatedOn: '2025-01-08T00:00:00.000Z' ---- +## Use Cases -Use isolated database environments to test and track AI agent behavior and performance. Create separate database schemas for testing different agent configurations and compare results using natural language queries. +### Analytics Dashboards -## How it works +Perfect for: -GibsonAI provides database environments where you can test different agent configurations, track their performance, and analyze behavior patterns. Create isolated databases for each test scenario and use natural language queries to analyze results. +- Web analytics and user behavior tracking +- E-commerce performance monitoring +- Marketing campaign effectiveness +- User engagement and retention metrics - +### Business Intelligence -MCP Integration +Enable: -Database Management +- Sales performance tracking +- Revenue analysis by segments +- Product performance metrics +- Customer lifetime value analysis -CLI Tools +### Operational Monitoring - +Support: -## Key Features +- System performance metrics +- Error tracking and analysis +- User activity monitoring +- Application health dashboards -### Isolated Testing Environments +### Custom Visualizations -- **Separate Databases**: Create isolated databases for each agent test -- **Independent Schemas**: Independent database schemas for different experiments -- **Safe Testing**: Test agent behavior without affecting production data -- **Environment Comparison**: Compare results across different test environments +Allow: -### Agent Performance Tracking +- Custom chart creation based on specific queries +- Real-time data visualization +- Interactive dashboard elements +- Dynamic filtering and drill-down capabilities -- **Behavior Logging**: Track agent actions and decisions in structured format -- **Performance Metrics**: Store and analyze agent performance data -- **Response Tracking**: Log agent responses and their effectiveness -- **Error Monitoring**: Track errors and failure patterns +## Benefits for Dashboard Development -### Natural Language Analysis +### Rapid Data Access -- **Query Testing Results**: Use natural language to analyze test results -- **Performance Comparison**: Compare agent performance across different scenarios -- **Behavior Analysis**: Analyze agent behavior patterns and trends -- **Results Reporting**: Generate reports on agent testing outcomes +- **Natural Language**: Query data using natural language instead of complex SQL +- **Instant APIs**: Auto-generated REST APIs for immediate data access +- **Flexible Queries**: Handle complex analytical queries easily +- **Real-time Data**: Access current data for live dashboards -## Use Cases +### Easy Integration -### Agent Development +- **Standard APIs**: REST APIs work with any dashboard tool +- **JSON Format**: Consistent JSON responses for easy parsing +- **Authentication**: Secure API access with built-in authentication +- **Documentation**: Auto-generated API documentation -Perfect for: +### Scalable Architecture -- Testing new agent features and 
capabilities -- Validating agent behavior in different scenarios -- Comparing different agent configurations -- Debugging agent issues and problems +- **Database Management**: Easily modify schema as dashboard needs evolve +- **Performance**: Optimized queries for dashboard performance +- **Security**: Secure data access with proper authentication +- **Reliability**: Robust database infrastructure + +## Best Practices + +### Query Optimization -### Performance Optimization +- **Specific Queries**: Use specific natural language queries for better performance +- **Time Ranges**: Include appropriate time ranges in queries +- **Indexing**: Ensure proper indexing for frequently queried data +- **Caching**: Implement caching for frequently accessed dashboard data -Enable: +### Dashboard Design -- Identifying performance bottlenecks -- Testing different optimization strategies -- Measuring impact of configuration changes -- Validating performance improvements +- **Clear Metrics**: Choose clear and meaningful metrics for visualization +- **Appropriate Visualizations**: Select appropriate chart types for data +- **User Experience**: Design intuitive and responsive dashboards +- **Performance**: Optimize dashboard loading times -### Behavior Validation +### Data Management -Support: +- **Data Quality**: Ensure high-quality data for accurate dashboards +- **Regular Updates**: Keep dashboard data current and relevant +- **Backup Strategy**: Implement proper data backup and recovery +- **Monitoring**: Monitor dashboard performance and usage -- Ensuring agent responses are appropriate -- Testing edge cases and error handling -- Validating decision-making logic -- Confirming compliance with requirements +## Getting Started -## Implementation Examples +1. **Create Database Schema**: Define your data structure using natural language +2. **Generate Models**: Create Python models with Gibson CLI +3. **Populate Data**: Add sample or real data to your database +4. **Test Queries**: Validate your natural language queries +5. 
**Integrate with Dashboard Tool**: Connect your preferred dashboard tool to GibsonAI APIs -### Setting Up Agent Testing Environment +## Gibson CLI Commands -```python -# Using Gibson CLI to create agent testing database -# Create agent testing tables -# gibson modify agent_tests "Create agent_tests table with id, test_name, agent_config, environment, created_at" -# gibson modify agent_actions "Create agent_actions table with id, test_id, action_type, input_data, output_data, timestamp, duration" -# gibson modify agent_metrics "Create agent_metrics table with id, test_id, metric_name, value, timestamp" -# gibson modify test_results "Create test_results table with id, test_id, result_type, data, success, error_message" +```bash +# Create database schema for dashboard data +gibson modify table_name "description of table structure" +gibson code models +gibson merge -# Generate models and apply changes -# gibson code models -# gibson merge +# Generate models for dashboard integration +gibson code models +gibson code schemas ``` -### Agent Testing Framework +## Supported Dashboard Tools -```python -import requests -import json -from datetime import datetime -import time +- **Retool**: Low-code dashboard builder +- **Grafana**: Time series visualization +- **Tableau**: Business intelligence platform +- **Power BI**: Microsoft's business analytics tool +- **Custom Dashboards**: Build your own using web frameworks -class AgentTester: - def __init__(self, api_key, environment="test"): - self.api_key = api_key - self.environment = environment - self.base_url = "https://api.gibsonai.com/v1/-" - self.headers = {"Authorization": f"Bearer {api_key}"} +Ready to create database-powered dashboards? [Get started with GibsonAI](/get-started/signing-up). - def create_test(self, test_name, agent_config): - """Create a new agent test""" - test_data = { - "test_name": test_name, - "agent_config": agent_config, - "environment": self.environment, - "created_at": datetime.now().isoformat() - } - response = requests.post( - f"{self.base_url}/agent-tests", - json=test_data, - headers=self.headers - ) +--- +title: Connect AI tools to GibsonAI through MCP server +subtitle: Use the GibsonAI MCP server to connect AI tools and agents to your databases +enableTableOfContents: true +updatedOn: '2025-01-08T00:00:00.000Z' +--- - if response.status_code == 201: - test_record = response.json() - print(f"Created test: {test_name}") - return test_record["id"] - else: - print(f"Failed to create test: {response.status_code}") - return None +Use the GibsonAI MCP server to connect AI tools and agents to your databases. The Model Context Protocol (MCP) server provides a standardized way for AI tools to interact with your GibsonAI databases using natural language. - def log_agent_action(self, test_id, action_type, input_data, output_data, duration): - """Log an agent action during testing""" - action_data = { - "test_id": test_id, - "action_type": action_type, - "input_data": input_data, - "output_data": output_data, - "timestamp": datetime.now().isoformat(), - "duration": duration - } +## How it works - response = requests.post( - f"{self.base_url}/agent-actions", - json=action_data, - headers=self.headers - ) +The GibsonAI MCP server allows AI tools like GitHub Copilot, Cursor, Claude, and other AI assistants to interact with your databases through natural language. It provides secure, contextual access to your data and schema management capabilities. 
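+
+To see what this looks like from the client side, here is a minimal sketch that connects to the server and lists the tools it exposes. It is hedged: it assumes the official `mcp` Python SDK (`pip install mcp`) and the server command used in the configuration examples below, and the credentials are placeholders:
+
+```python
+import asyncio
+
+from mcp import ClientSession, StdioServerParameters
+from mcp.client.stdio import stdio_client
+
+# Launch the GibsonAI MCP server the same way an AI tool would
+server = StdioServerParameters(
+    command="npx",
+    args=["@gibsonai/mcp-server"],
+    env={"GIBSON_API_KEY": "your_api_key_here", "GIBSON_PROJECT_ID": "your_project_id"},
+)
+
+async def main():
+    async with stdio_client(server) as (read, write):
+        async with ClientSession(read, write) as session:
+            await session.initialize()
+            # Each listed tool is a database operation the AI client can invoke
+            tools = await session.list_tools()
+            for tool in tools.tools:
+                print(f"{tool.name}: {tool.description}")
+
+asyncio.run(main())
+```
+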
- if response.status_code == 201: - return response.json() - else: - print(f"Failed to log action: {response.status_code}") - return None + - def record_metric(self, test_id, metric_name, value): - """Record a performance metric""" - metric_data = { - "test_id": test_id, - "metric_name": metric_name, - "value": value, - "timestamp": datetime.now().isoformat() - } +MCP Server - response = requests.post( - f"{self.base_url}/agent-metrics", - json=metric_data, - headers=self.headers - ) +MCP Client Connection - if response.status_code == 201: - return response.json() - else: - print(f"Failed to record metric: {response.status_code}") - return None +Database Management - def log_test_result(self, test_id, result_type, data, success, error_message=None): - """Log test result""" - result_data = { - "test_id": test_id, - "result_type": result_type, - "data": data, - "success": success, - "error_message": error_message - } + - response = requests.post( - f"{self.base_url}/test-results", - json=result_data, - headers=self.headers - ) +## Key Features - if response.status_code == 201: - return response.json() - else: - print(f"Failed to log result: {response.status_code}") - return None -``` +### Natural Language Database Operations -### Testing Different Agent Configurations +- **Schema Management**: Create and modify database schemas using natural language +- **Data Querying**: Execute text-to-SQL queries through AI tools +- **Table Operations**: Add, modify, and remove tables with simple prompts +- **Relationship Building**: Define relationships between tables naturally -```python -class AgentBehaviorTester: - def __init__(self, api_key): - self.tester = AgentTester(api_key) +### AI Tool Integration - def test_response_configurations(self): - """Test different agent response configurations""" +- **GitHub Copilot**: Use with VS Code and GitHub Copilot +- **Cursor**: Integrate with Cursor AI editor +- **Claude**: Connect with Claude AI assistant +- **Custom Tools**: Connect any MCP-compatible AI tool - # Test Configuration A: Conservative responses - config_a = { - "response_style": "conservative", - "confidence_threshold": 0.8, - "escalation_enabled": True - } +### Secure Database Access - test_a_id = self.tester.create_test("Conservative Response Test", config_a) +- **Authentication**: Secure API key-based authentication +- **Project Context**: AI tools understand your specific database schema +- **Safe Operations**: Built-in protections for data integrity +- **Scoped Access**: Access limited to your specific GibsonAI project - # Test Configuration B: Assertive responses - config_b = { - "response_style": "assertive", - "confidence_threshold": 0.6, - "escalation_enabled": False - } +## Implementation Examples - test_b_id = self.tester.create_test("Assertive Response Test", config_b) +### Setting up MCP Server - # Run tests with same scenarios - test_scenarios = [ - {"user_input": "I need help with my order", "expected_action": "order_lookup"}, - {"user_input": "I want to cancel my subscription", "expected_action": "cancellation_process"}, - {"user_input": "This product is defective", "expected_action": "refund_process"} - ] +```json +{ + "mcpServers": { + "gibsonai": { + "command": "npx", + "args": ["@gibsonai/mcp-server"], + "env": { + "GIBSON_API_KEY": "your_api_key_here", + "GIBSON_PROJECT_ID": "your_project_id" + } + } + } +} +``` - for scenario in test_scenarios: - # Test Configuration A - self.run_test_scenario(test_a_id, scenario, config_a) +### Using with GitHub Copilot - # Test 
Configuration B - self.run_test_scenario(test_b_id, scenario, config_b) +```python +# Example: Creating a database schema through GitHub Copilot +# Prompt: "Create a user management system with users, roles, and permissions" - def run_test_scenario(self, test_id, scenario, config): - """Run a single test scenario""" - start_time = time.time() +# The MCP server will help GitHub Copilot understand your request and: +# 1. Create the database schema +# 2. Generate the appropriate tables +# 3. Set up relationships between tables +# 4. Generate Python models - # Simulate agent processing - try: - # Mock agent response based on configuration - if config["response_style"] == "conservative": - response = self.generate_conservative_response(scenario["user_input"]) - else: - response = self.generate_assertive_response(scenario["user_input"]) +# gibson modify users "Create a users table with id, username, email, password_hash, created_at" +# gibson modify roles "Create a roles table with id, name, description, permissions" +# gibson modify user_roles "Create a user_roles table to link users and roles" +# gibson code models +# gibson merge +``` + +### Natural Language Database Queries - duration = time.time() - start_time +```python +# Example queries through MCP server +# AI tool prompts that the MCP server can handle: - # Log the action - self.tester.log_agent_action( - test_id, - "user_interaction", - scenario["user_input"], - response, - duration - ) +# "Show me all users who registered in the last 30 days" +# "Create a new user with email john@example.com" +# "Update the user table to add a last_login column" +# "Find all orders with status 'pending' and total > 100" - # Record metrics - self.tester.record_metric(test_id, "response_time", duration) - self.tester.record_metric(test_id, "confidence_score", response.get("confidence", 0)) +# The MCP server translates these to appropriate Gibson commands or API calls +``` - # Log result - success = response.get("action") == scenario["expected_action"] - self.tester.log_test_result( - test_id, - "scenario_test", - {"scenario": scenario, "response": response}, - success - ) +### Schema Evolution Example - except Exception as e: - # Log error - self.tester.log_test_result( - test_id, - "scenario_test", - {"scenario": scenario, "error": str(e)}, - False, - str(e) - ) +```python +# Using AI tools to evolve your database schema - def generate_conservative_response(self, user_input): - """Generate conservative agent response""" - # Mock conservative response logic - return { - "response": "I'd be happy to help you with that. Let me connect you with a specialist.", - "action": "escalate", - "confidence": 0.9 - } +# Prompt: "Add a subscription feature to the user system" +# MCP server helps generate: - def generate_assertive_response(self, user_input): - """Generate assertive agent response""" - # Mock assertive response logic - return { - "response": "I can help you with that right away. 
Let me process your request.", - "action": "direct_action", - "confidence": 0.7 - } +# gibson modify subscriptions "Create a subscriptions table with id, user_id, plan_type, status, start_date, end_date" +# gibson modify users "Add subscription_id column to users table" +# gibson code models +# gibson merge ``` -### Analyzing Test Results +## AI Tool Capabilities -```python -class TestResultAnalyzer: - def __init__(self, api_key): - self.api_key = api_key - self.base_url = "https://api.gibsonai.com/v1/-" - self.headers = {"Authorization": f"Bearer {api_key}"} +### Schema Creation - def compare_test_performance(self, test_a_name, test_b_name): - """Compare performance between two tests""" - query_request = { - "query": f"Compare average response time and success rate between tests named '{test_a_name}' and '{test_b_name}'" - } +AI tools can help you: - response = requests.post( - f"{self.base_url}/query", - json=query_request, - headers=self.headers - ) +- Design database schemas from natural language descriptions +- Create tables with appropriate data types +- Define relationships between tables +- Generate indexes and constraints - if response.status_code == 200: - results = response.json() - print("Test Performance Comparison:") - for result in results: - print(f" {result}") - return results - else: - print(f"Analysis failed: {response.status_code}") - return None +### Data Operations - def analyze_agent_behavior_patterns(self, test_id): - """Analyze agent behavior patterns in a test""" - query_request = { - "query": f"Analyze action types and response patterns for test ID {test_id}" - } +Enable AI tools to: - response = requests.post( - f"{self.base_url}/query", - json=query_request, - headers=self.headers - ) +- Query data using natural language +- Insert new records +- Update existing data +- Generate reports and analytics - if response.status_code == 200: - results = response.json() - print(f"Behavior Analysis for Test {test_id}:") - for result in results: - print(f" {result}") - return results - else: - print(f"Analysis failed: {response.status_code}") - return None +### Model Generation - def get_error_analysis(self, test_id): - """Get error analysis for a test""" - query_request = { - "query": f"Show all errors and failure patterns for test ID {test_id}" - } +Automatically generate: - response = requests.post( - f"{self.base_url}/query", - json=query_request, - headers=self.headers - ) +- SQLAlchemy models +- Pydantic schemas +- Database migration scripts +- API documentation - if response.status_code == 200: - results = response.json() - print(f"Error Analysis for Test {test_id}:") - for result in results: - print(f" {result}") - return results - else: - print(f"Analysis failed: {response.status_code}") - return None +## MCP Server Commands - def generate_test_report(self, test_name): - """Generate comprehensive test report""" - query_request = { - "query": f"Generate a comprehensive report for test '{test_name}' including performance metrics, success rates, and error analysis" - } +### Database Schema Operations - response = requests.post( - f"{self.base_url}/query", - json=query_request, - headers=self.headers - ) +```bash +# Commands the MCP server can execute: - if response.status_code == 200: - results = response.json() - print(f"Test Report for {test_name}:") - for result in results: - print(f" {result}") - return results - else: - print(f"Report generation failed: {response.status_code}") - return None +# Create/modify tables +gibson modify table_name "description of 
changes" + +# Generate models +gibson code models +gibson code schemas + +# Apply changes +gibson merge + +# Build database +gibson build datastore ``` -### A/B Testing Example +### Query Operations ```python -class ABTestingFramework: - def __init__(self, api_key): - self.tester = AgentTester(api_key) - self.analyzer = TestResultAnalyzer(api_key) - - def run_ab_test(self, test_name, config_a, config_b, scenarios): - """Run A/B test with two configurations""" +# Text-to-SQL through MCP server +# AI tools can generate queries like: - # Create tests for both configurations - test_a_id = self.tester.create_test(f"{test_name}_A", config_a) - test_b_id = self.tester.create_test(f"{test_name}_B", config_b) +import requests - # Run scenarios for both configurations - for scenario in scenarios: - # Test Configuration A - self.run_scenario_test(test_a_id, scenario, config_a) +response = requests.post( + "https://api.gibsonai.com/v1/-/query", + json={"query": "Show me all active users with their last login"}, + headers={"Authorization": "Bearer your_api_key"} +) +``` - # Test Configuration B - self.run_scenario_test(test_b_id, scenario, config_b) +## Integration Examples - # Analyze results - print(f"\nA/B Test Results for {test_name}:") - self.analyzer.compare_test_performance(f"{test_name}_A", f"{test_name}_B") +### VS Code with GitHub Copilot - return test_a_id, test_b_id +```json +// settings.json configuration for MCP server +{ + "github.copilot.enable": { + "*": true, + "plaintext": true, + "markdown": true + }, + "github.copilot.advanced": { + "listCount": 10, + "inlineSuggestCount": 3 + } +} +``` - def run_scenario_test(self, test_id, scenario, config): - """Run a single scenario test""" - start_time = time.time() +### Cursor AI Integration - try: - # Simulate agent processing based on configuration - response = self.simulate_agent_response(scenario, config) - duration = time.time() - start_time +```python +# Example workflow in Cursor +# 1. Describe your database needs in natural language +# 2. Cursor uses MCP server to understand GibsonAI capabilities +# 3. Generates appropriate Gibson CLI commands +# 4. 
Creates Python models and API code - # Log action - self.tester.log_agent_action( - test_id, - "scenario_test", - scenario, - response, - duration - ) +# "I need a blog system with posts, authors, and comments" +# Cursor + MCP server generates: +# - Database schema +# - SQLAlchemy models +# - REST API endpoints +# - Sample queries +``` - # Record metrics - self.tester.record_metric(test_id, "response_time", duration) - self.tester.record_metric(test_id, "confidence_score", response.get("confidence", 0)) +### Claude AI Integration - # Determine success - success = response.get("error") is None +```python +# Example conversation with Claude using MCP server +# User: "Help me create a customer management system" +# Claude: "I'll help you create a customer management system using GibsonAI" - # Log result - self.tester.log_test_result( - test_id, - "ab_test_scenario", - {"scenario": scenario, "response": response}, - success, - response.get("error") - ) +# Claude generates Gibson commands: +# gibson modify customers "Create customers table with name, email, phone, address" +# gibson modify orders "Create orders table with customer_id, total, status, created_at" +# gibson modify order_items "Create order_items table with order_id, product_id, quantity, price" +# gibson code models +# gibson merge +``` - except Exception as e: - # Log error - self.tester.log_test_result( - test_id, - "ab_test_scenario", - {"scenario": scenario, "error": str(e)}, - False, - str(e) - ) +## Use Cases - def simulate_agent_response(self, scenario, config): - """Simulate agent response based on configuration""" - # Mock agent response logic - if config.get("response_style") == "detailed": - return { - "response": "I'll provide detailed help with your request...", - "confidence": 0.85, - "action": "detailed_response" - } - else: - return { - "response": "I can help with that.", - "confidence": 0.75, - "action": "brief_response" - } -``` +### Rapid Prototyping -## Benefits for AI Agent Testing +Perfect for: -### Comprehensive Testing +- Quickly creating database schemas for new projects +- Testing different data models +- Generating sample data and queries +- Validating database designs -- **Isolated Environments**: Test different configurations without interference -- **Structured Data**: Organized test data for easy analysis -- **Natural Language Analysis**: Query test results using natural language -- **Performance Tracking**: Track agent performance over time +### AI-Assisted Development -### Data-Driven Insights +Enable: -- **Behavior Analysis**: Analyze agent behavior patterns and trends -- **Performance Comparison**: Compare different agent configurations -- **Error Identification**: Identify and analyze error patterns -- **Optimization Guidance**: Data-driven insights for agent improvement +- Natural language database operations +- Automated model generation +- Schema evolution guidance +- Query optimization suggestions -### Scalable Testing +### Team Collaboration -- **Multiple Environments**: Test multiple configurations simultaneously -- **Flexible Schema**: Adapt database schema to different testing needs -- **API Integration**: Easy integration with existing testing workflows -- **Automated Analysis**: Automated analysis and reporting capabilities +Support: -## Best Practices +- Shared database understanding through AI tools +- Consistent schema management +- Automated documentation generation +- Knowledge transfer between team members -### Test Design +## Benefits -- **Clear Objectives**: Define clear 
testing objectives and success criteria -- **Realistic Scenarios**: Use realistic test scenarios that match production usage -- **Controlled Variables**: Control variables to isolate the impact of changes -- **Comprehensive Coverage**: Test edge cases and error scenarios +### For Developers -### Data Management +- **Faster Development**: Create databases using natural language +- **Reduced Errors**: AI-assisted schema design +- **Better Documentation**: Automatic generation of models and docs +- **Consistent Patterns**: Standardized database operations -- **Consistent Logging**: Log all relevant data consistently across tests -- **Data Quality**: Ensure high-quality test data for accurate analysis -- **Version Control**: Track changes to test configurations and scenarios -- **Data Retention**: Implement appropriate data retention policies +### For AI Tools -### Analysis and Reporting +- **Database Context**: Understanding of your specific schema +- **Safe Operations**: Protected database access +- **Natural Interface**: Human-like database interactions +- **Immediate Feedback**: Real-time schema and data access -- **Regular Analysis**: Regularly analyze test results for insights -- **Comparative Analysis**: Compare results across different configurations -- **Trend Analysis**: Track performance trends over time -- **Actionable Insights**: Focus on actionable insights for improvement +### For Teams + +- **Shared Knowledge**: AI tools understand team's database +- **Consistent Approach**: Standardized database operations +- **Easy Onboarding**: New team members can use AI tools +- **Collaborative Design**: AI-assisted schema discussions ## Getting Started -1. **Design Test Schema**: Define your agent testing database schema -2. **Create Test Environment**: Set up isolated database for testing -3. **Implement Testing Framework**: Create framework for logging test data -4. **Run Tests**: Execute tests with different agent configurations -5. **Analyze Results**: Use natural language queries to analyze results +1. **Set up GibsonAI Project**: Create your GibsonAI project and get API keys +2. **Configure MCP Server**: Install and configure the GibsonAI MCP server +3. **Connect AI Tools**: Configure your AI tools to use the MCP server +4. **Test Integration**: Try natural language database operations +5. **Build Your Schema**: Use AI tools to create and manage your database -## Gibson CLI Commands +## Security Considerations -```bash -# Create agent testing schema -gibson modify table_name "description of testing table" -gibson code models -gibson merge +- **API Key Management**: Secure storage of API keys +- **Project Isolation**: MCP server access scoped to specific project +- **Safe Operations**: Built-in protections for destructive operations +- **Audit Trail**: Track all operations performed through MCP server -# Generate models for testing integration -gibson code models -gibson code schemas +## Troubleshooting -# Reset testing environment -gibson forget last -gibson build datastore -``` +### Common Issues -Ready to set up database environments for testing AI agent behavior? [Get started with GibsonAI](/get-started/signing-up). 
+- **Connection Problems**: Check API key and project ID configuration +- **Permission Errors**: Verify API key has necessary permissions +- **Schema Conflicts**: Ensure schema changes don't conflict with existing data +- **Tool Compatibility**: Verify AI tool supports MCP protocol + +### Best Practices + +- **Start Small**: Begin with simple schema operations +- **Test Changes**: Validate schema changes before applying +- **Use Dev Mode**: Enable Gibson dev mode for automatic code generation +- **Monitor Usage**: Track MCP server usage and performance + +Ready to connect your AI tools to GibsonAI? [Get started with the MCP server setup](/ai/mcp-server). --- -title: Database schemas and sample data for AI agent testing -subtitle: Create database schemas and populate with sample data for AI agent development and testing +title: Database environments for multi-agent applications +subtitle: Create isolated database environments for different AI agents or agent workflows enableTableOfContents: true updatedOn: '2025-01-08T00:00:00.000Z' --- -Create database schemas and populate with sample data for AI agent development and testing. Use GibsonAI's natural language database management to quickly set up test environments with realistic data structures. +Create isolated database environments for different AI agents or agent workflows. Use GibsonAI's natural language database management to set up separate schemas and data access for different agent applications. -MCP Integration +MCP Integration Database Management @@ -6898,332 +6980,224 @@ Create database schemas and populate with sample data for AI agent development a ## Key Features -### Natural Language Schema Creation +### Project-Based Isolation -- **Table Definition**: Create tables using natural language descriptions -- **Relationship Building**: Define relationships between tables naturally -- **Data Type Selection**: Automatically choose appropriate data types -- **Index Creation**: Add indexes for performance optimization +- **Separate Projects**: Create different GibsonAI projects for different agents +- **Independent Schemas**: Each project has its own database schema +- **Isolated Data**: Complete data separation between agent applications +- **Individual APIs**: Each project gets its own REST API endpoints -### Sample Data Creation +### Natural Language Management -- **Manual Data Entry**: Create sample records through REST APIs -- **Bulk Operations**: Insert multiple records at once -- **Realistic Scenarios**: Set up data that mirrors real-world use cases -- **Agent Testing**: Create data specifically for agent testing scenarios +- **Schema Creation**: Define database schemas using natural language +- **Table Management**: Add and modify tables with simple prompts +- **Relationship Building**: Define relationships between tables naturally +- **Data Type Selection**: Automatically choose appropriate data types ## Implementation Examples -### Creating Test Database Schema +### Creating Separate Agent Databases ```python -# Using Gibson CLI to create schema for agent testing -# Create user management system -# gibson modify users "Create users table with id, username, email, created_at, status" -# gibson modify user_profiles "Create user_profiles table with user_id, first_name, last_name, bio, avatar_url" -# gibson modify conversations "Create conversations table with id, user_id, agent_id, message, response, timestamp" -# gibson modify agent_metrics "Create agent_metrics table with agent_id, metric_name, value, recorded_at" +# Using 
Gibson CLI to create database for Agent A +# gibson modify user_sessions "Create a user sessions table for chatbot conversations" +# gibson modify conversation_history "Create conversation history with user_id, message, response, timestamp" +# gibson code models +# gibson merge -# Generate Python models +# For Agent B (different project) +# gibson modify product_catalog "Create a product catalog table with name, description, price, category" +# gibson modify user_preferences "Create user preferences table with user_id, preferred_categories, budget_range" # gibson code models # gibson merge ``` -### Populating with Sample Data +### Text-to-SQL for Different Agents ```python +# Agent A: Chatbot querying conversation data import requests -import random -from datetime import datetime, timedelta -class TestDataGenerator: - def __init__(self, api_key): - self.api_key = api_key - self.base_url = "https://api.gibsonai.com/v1/-" - self.headers = {"Authorization": f"Bearer {api_key}"} +query_request = { + "query": "Show me all conversations from today where users asked about pricing" +} - def create_sample_users(self): - """Create sample users for testing""" - sample_users = [ - {"username": "alice_smith", "email": "alice@example.com", "status": "active"}, - {"username": "bob_jones", "email": "bob@example.com", "status": "active"}, - {"username": "charlie_brown", "email": "charlie@example.com", "status": "inactive"}, - {"username": "diana_prince", "email": "diana@example.com", "status": "active"}, - {"username": "eve_wilson", "email": "eve@example.com", "status": "active"} - ] +response = requests.post( + "https://api.gibsonai.com/v1/-/query", + json=query_request, + headers={"Authorization": "Bearer agent_a_api_key"} +) - created_users = [] - for user in sample_users: - response = requests.post( - f"{self.base_url}/users", - json=user, - headers=self.headers - ) - if response.status_code == 201: - created_users.append(response.json()) - print(f"Created user: {user['username']}") +# Agent B: E-commerce agent querying product data +query_request = { + "query": "Find all products under $50 in electronics category" +} + +response = requests.post( + "https://api.gibsonai.com/v1/-/query", + json=query_request, + headers={"Authorization": "Bearer agent_b_api_key"} +) +``` - return created_users +### Using Different API Endpoints - def create_sample_profiles(self, users): - """Create sample user profiles""" - profiles = [ - {"first_name": "Alice", "last_name": "Smith", "bio": "Software engineer"}, - {"first_name": "Bob", "last_name": "Jones", "bio": "Product manager"}, - {"first_name": "Charlie", "last_name": "Brown", "bio": "Designer"}, - {"first_name": "Diana", "last_name": "Prince", "bio": "Data scientist"}, - {"first_name": "Eve", "last_name": "Wilson", "bio": "Marketing specialist"} - ] +```python +# Agent A accessing conversation data +response = requests.get( + "https://api.gibsonai.com/v1/-/conversation-history", + headers={"Authorization": "Bearer agent_a_api_key"} +) - for i, user in enumerate(users): - if i < len(profiles): - profile_data = profiles[i] - profile_data["user_id"] = user["id"] +# Agent B accessing product data +response = requests.get( + "https://api.gibsonai.com/v1/-/product-catalog", + headers={"Authorization": "Bearer agent_b_api_key"} +) - response = requests.post( - f"{self.base_url}/user-profiles", - json=profile_data, - headers=self.headers - ) - if response.status_code == 201: - print(f"Created profile for: {user['username']}") +# Creating new records for different agents 
+# Agent A creates new conversation +new_conversation = { + "user_id": "user123", + "message": "What are your pricing plans?", + "response": "We offer three plans: Basic, Pro, and Enterprise...", + "timestamp": "2024-01-15T10:30:00Z" +} - def create_sample_conversations(self, users): - """Create sample conversations for agent testing""" - conversation_templates = [ - { - "message": "Hello, I need help with my account", - "response": "I'd be happy to help with your account. What specifically do you need assistance with?" - }, - { - "message": "How do I reset my password?", - "response": "To reset your password, click on 'Forgot Password' on the login page and follow the instructions." - }, - { - "message": "I'm having trouble with payments", - "response": "I understand payment issues can be frustrating. Let me help you resolve this." - }, - { - "message": "Can you tell me about your features?", - "response": "I'd be happy to explain our features. We offer database management, natural language queries, and more." - } - ] +response = requests.post( + "https://api.gibsonai.com/v1/-/conversation-history", + json=new_conversation, + headers={"Authorization": "Bearer agent_a_api_key"} +) +``` - for user in users: - for template in conversation_templates: - conversation = { - "user_id": user["id"], - "agent_id": "agent_001", - "message": template["message"], - "response": template["response"], - "timestamp": datetime.now().isoformat() - } +## Use Cases - response = requests.post( - f"{self.base_url}/conversations", - json=conversation, - headers=self.headers - ) - if response.status_code == 201: - print(f"Created conversation for: {user['username']}") +### Agent Specialization - def create_sample_metrics(self): - """Create sample agent metrics""" - metrics = [ - {"agent_id": "agent_001", "metric_name": "response_time", "value": 0.5}, - {"agent_id": "agent_001", "metric_name": "satisfaction_score", "value": 4.5}, - {"agent_id": "agent_001", "metric_name": "conversations_handled", "value": 150}, - {"agent_id": "agent_002", "metric_name": "response_time", "value": 0.7}, - {"agent_id": "agent_002", "metric_name": "satisfaction_score", "value": 4.2}, - {"agent_id": "agent_002", "metric_name": "conversations_handled", "value": 120} - ] +Perfect for scenarios where different agents need different data: - for metric in metrics: - metric["recorded_at"] = datetime.now().isoformat() +- **Customer Service Agent**: Stores conversation history and user issues +- **E-commerce Agent**: Manages product catalog and purchase history +- **Content Agent**: Handles articles, media, and content recommendations +- **Analytics Agent**: Tracks metrics, performance, and insights - response = requests.post( - f"{self.base_url}/agent-metrics", - json=metric, - headers=self.headers - ) - if response.status_code == 201: - print(f"Created metric: {metric['metric_name']} for {metric['agent_id']}") +### Development Environments - def generate_all_test_data(self): - """Generate complete test dataset""" - print("Creating test data...") +Create separate databases for: - # Create users - users = self.create_sample_users() +- **Development**: Test new agent features +- **Staging**: Validate agent behavior before production +- **Production**: Live agent operations +- **Training**: Historical data for agent training - # Create profiles - self.create_sample_profiles(users) +### Agent Collaboration - # Create conversations - self.create_sample_conversations(users) +Enable agents to: - # Create metrics - self.create_sample_metrics() +- 
Share specific data through controlled interfaces +- Maintain their own specialized datasets +- Query each other's data when needed +- Maintain data consistency across workflows - print("Test data creation completed!") - return users -``` +## Schema Management -### Using Test Data in Agent Development +### Independent Schema Evolution ```python -class AgentTester: - def __init__(self, api_key): - self.api_key = api_key - self.base_url = "https://api.gibsonai.com/v1/-" - self.headers = {"Authorization": f"Bearer {api_key}"} +# Agent A evolving its schema +# gibson modify conversation_history "Add sentiment_score column to track user satisfaction" +# gibson modify user_sessions "Add session_duration and interaction_count fields" +# gibson code models +# gibson merge - def test_user_lookup(self): - """Test agent's ability to look up users""" - query_request = { - "query": "Show me all active users" - } +# Agent B evolving its schema independently +# gibson modify product_catalog "Add inventory_count and supplier_info columns" +# gibson modify user_preferences "Add notification_settings and purchase_history_summary" +# gibson code models +# gibson merge +``` - response = requests.post( - f"{self.base_url}/query", - json=query_request, - headers=self.headers - ) +### Data Model Generation - if response.status_code == 200: - results = response.json() - print(f"Found {len(results)} active users") - return results - else: - print(f"Query failed: {response.status_code}") - return None +Each agent gets its own Python models: - def test_conversation_analysis(self): - """Test agent's ability to analyze conversations""" - query_request = { - "query": "Show me all conversations from the last 24 hours with user satisfaction scores" - } +- **SQLAlchemy Models**: For database operations +- **Pydantic Schemas**: For data validation +- **Independent Updates**: Models update independently per agent - response = requests.post( - f"{self.base_url}/query", - json=query_request, - headers=self.headers - ) +## MCP Server Integration - if response.status_code == 200: - results = response.json() - print(f"Found {len(results)} recent conversations") - return results - else: - print(f"Query failed: {response.status_code}") - return None +Connect different AI tools to different databases: - def test_metrics_reporting(self): - """Test agent's ability to generate metrics reports""" - query_request = { - "query": "Calculate average response time and satisfaction score by agent" - } +- **Agent-Specific Access**: Each agent connects to its own database +- **Natural Language Operations**: Use natural language for database operations +- **Contextual Queries**: AI tools understand the specific agent context +- **Secure Separation**: Each agent operates within its own data boundaries - response = requests.post( - f"{self.base_url}/query", - json=query_request, - headers=self.headers - ) +## Benefits for Multi-Agent Applications - if response.status_code == 200: - results = response.json() - print("Agent metrics report generated") - return results - else: - print(f"Query failed: {response.status_code}") - return None -``` +- **Data Isolation**: Complete separation between agent data +- **Independent Development**: Agents can evolve independently +- **Specialized Schemas**: Each agent has optimal data structure +- **Scalable Architecture**: Add new agents without affecting existing ones +- **Natural Language Control**: Manage all databases using natural language +- **Automatic APIs**: Each agent gets its own REST API endpoints 
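+
+A thin per-agent wrapper makes this isolation explicit in application code. The sketch below is hypothetical (the class name and keys are placeholders); the important property is that each agent is constructed with its own project-scoped API key, so cross-agent access is impossible by construction:
+
+```python
+import requests
+
+class AgentDatabase:
+    """Hypothetical helper that scopes all data access to one agent's project."""
+
+    def __init__(self, api_key):
+        self.base_url = "https://api.gibsonai.com/v1/-"
+        self.headers = {"Authorization": f"Bearer {api_key}"}
+
+    def query(self, natural_language):
+        """Run a text-to-SQL query against this agent's database only."""
+        response = requests.post(
+            f"{self.base_url}/query",
+            json={"query": natural_language},
+            headers=self.headers,
+        )
+        response.raise_for_status()
+        return response.json()
+
+# Each agent gets its own project-scoped credentials
+support_db = AgentDatabase("agent_a_api_key")
+commerce_db = AgentDatabase("agent_b_api_key")
+
+pending_orders = commerce_db.query("Find all orders with status 'pending'")
+```
+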
-### E-commerce Agent Test Data +## Getting Started -```python -def create_ecommerce_test_data(): - """Create test data for e-commerce agent""" +1. **Create Projects**: Set up separate GibsonAI projects for each agent +2. **Define Schemas**: Use natural language to create database schemas +3. **Generate Models**: Create Python models for each agent +4. **Connect Agents**: Connect each agent to its specific database +5. **Iterate**: Evolve each agent's database as needs change - # Create schema - # gibson modify products "Create products table with id, name, description, price, category, stock_quantity" - # gibson modify orders "Create orders table with id, customer_id, total, status, created_at" - # gibson modify order_items "Create order_items table with id, order_id, product_id, quantity, price" - # gibson modify customers "Create customers table with id, name, email, phone, address" - # gibson code models - # gibson merge - # Sample products - products = [ - {"name": "Laptop", "description": "High-performance laptop", "price": 999.99, "category": "Electronics", "stock_quantity": 50}, - {"name": "Mouse", "description": "Wireless mouse", "price": 29.99, "category": "Electronics", "stock_quantity": 200}, - {"name": "Coffee Mug", "description": "Ceramic coffee mug", "price": 12.99, "category": "Kitchen", "stock_quantity": 100}, - {"name": "Notebook", "description": "Spiral notebook", "price": 3.99, "category": "Office", "stock_quantity": 300} - ] +--- +title: Database environments for testing AI agent behavior +subtitle: Use isolated database environments to test and track AI agent behavior and performance +enableTableOfContents: true +updatedOn: '2025-01-08T00:00:00.000Z' +--- + +Use isolated database environments to test and track AI agent behavior and performance. Create separate database schemas for testing different agent configurations and compare results using natural language queries. - # Sample customers - customers = [ - {"name": "John Doe", "email": "john@example.com", "phone": "555-0123", "address": "123 Main St"}, - {"name": "Jane Smith", "email": "jane@example.com", "phone": "555-0456", "address": "456 Oak Ave"} - ] +## How it works - # Sample orders - orders = [ - {"customer_id": 1, "total": 1029.98, "status": "completed"}, - {"customer_id": 2, "total": 16.98, "status": "pending"} - ] +GibsonAI provides database environments where you can test different agent configurations, track their performance, and analyze behavior patterns. Create isolated databases for each test scenario and use natural language queries to analyze results. 
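+
+For example, once two configurations have logged results into their own isolated schemas, a single natural language query can compare them. This is only a sketch (the environment names are placeholders); the implementation examples below flesh the pattern out:
+
+```python
+import requests
+
+# Compare two isolated test environments in one natural language query
+response = requests.post(
+    "https://api.gibsonai.com/v1/-/query",
+    json={"query": "Compare average response time and success rate between environment 'config_a' and environment 'config_b'"},
+    headers={"Authorization": "Bearer your_api_key"},
+)
+print(response.json())
+```
+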
- return {"products": products, "customers": customers, "orders": orders} -``` + -## Testing Scenarios +MCP Integration -### Customer Service Agent Testing +Database Management -```python -def test_customer_service_scenarios(): - """Test customer service agent scenarios""" +CLI Tools - # Test user information lookup - query_request = { - "query": "Find customer information for user ID 123" - } + - # Test order history - query_request = { - "query": "Show order history for customer john@example.com" - } +## Key Features - # Test support ticket creation - ticket_data = { - "customer_id": 123, - "issue": "Password reset request", - "priority": "medium", - "status": "open" - } -``` +### Isolated Testing Environments -### Analytics Agent Testing +- **Separate Databases**: Create isolated databases for each agent test +- **Independent Schemas**: Independent database schemas for different experiments +- **Safe Testing**: Test agent behavior without affecting production data +- **Environment Comparison**: Compare results across different test environments -```python -def test_analytics_scenarios(): - """Test analytics agent scenarios""" +### Agent Performance Tracking - # Test user engagement metrics - query_request = { - "query": "Calculate user engagement metrics for the last 30 days" - } +- **Behavior Logging**: Track agent actions and decisions in structured format +- **Performance Metrics**: Store and analyze agent performance data +- **Response Tracking**: Log agent responses and their effectiveness +- **Error Monitoring**: Track errors and failure patterns - # Test conversation analysis - query_request = { - "query": "Analyze conversation sentiment and response times" - } +### Natural Language Analysis - # Test performance trends - query_request = { - "query": "Show agent performance trends over the last week" - } -``` +- **Query Testing Results**: Use natural language to analyze test results +- **Performance Comparison**: Compare agent performance across different scenarios +- **Behavior Analysis**: Analyze agent behavior patterns and trends +- **Results Reporting**: Generate reports on agent testing outcomes ## Use Cases @@ -7231,564 +7205,725 @@ def test_analytics_scenarios(): Perfect for: -- Creating realistic test environments -- Testing agent responses to various scenarios -- Validating agent performance with sample data -- Debugging agent database interactions +- Testing new agent features and capabilities +- Validating agent behavior in different scenarios +- Comparing different agent configurations +- Debugging agent issues and problems -### Schema Validation +### Performance Optimization Enable: -- Testing database schema with realistic data -- Validating data relationships and constraints -- Ensuring data integrity across operations -- Testing schema evolution with existing data +- Identifying performance bottlenecks +- Testing different optimization strategies +- Measuring impact of configuration changes +- Validating performance improvements -### Performance Testing +### Behavior Validation Support: -- Load testing with sample datasets -- Query performance validation -- Database optimization testing -- Scalability testing with realistic data volumes +- Ensuring agent responses are appropriate +- Testing edge cases and error handling +- Validating decision-making logic +- Confirming compliance with requirements -## Best Practices +## Implementation Examples -### Test Data Design +### Setting Up Agent Testing Environment -- **Realistic Data**: Create data that mimics real-world 
scenarios -- **Comprehensive Coverage**: Include edge cases and boundary conditions -- **Data Relationships**: Ensure proper relationships between tables -- **Data Variety**: Include different data types and formats +```python +# Using Gibson CLI to create agent testing database +# Create agent testing tables +# gibson modify agent_tests "Create agent_tests table with id, test_name, agent_config, environment, created_at" +# gibson modify agent_actions "Create agent_actions table with id, test_id, action_type, input_data, output_data, timestamp, duration" +# gibson modify agent_metrics "Create agent_metrics table with id, test_id, metric_name, value, timestamp" +# gibson modify test_results "Create test_results table with id, test_id, result_type, data, success, error_message" -### Testing Strategy +# Generate models and apply changes +# gibson code models +# gibson merge +``` -- **Incremental Testing**: Start with simple scenarios and build complexity -- **Automated Testing**: Create scripts for repeatable test data generation -- **Data Cleanup**: Clean up test data after testing -- **Version Control**: Track test data changes with schema evolution +### Agent Testing Framework -### Data Management +```python +import requests +import json +from datetime import datetime +import time -- **Separation**: Keep test data separate from production data -- **Documentation**: Document test data scenarios and purposes -- **Maintenance**: Regularly update test data to reflect schema changes -- **Security**: Ensure test data doesn't contain sensitive information +class AgentTester: + def __init__(self, api_key, environment="test"): + self.api_key = api_key + self.environment = environment + self.base_url = "https://api.gibsonai.com/v1/-" + self.headers = {"Authorization": f"Bearer {api_key}"} -## Gibson CLI Commands + def create_test(self, test_name, agent_config): + """Create a new agent test""" + test_data = { + "test_name": test_name, + "agent_config": agent_config, + "environment": self.environment, + "created_at": datetime.now().isoformat() + } -```bash -# Create database schema -gibson modify table_name "description of table structure" -gibson code models -gibson merge + response = requests.post( + f"{self.base_url}/agent-tests", + json=test_data, + headers=self.headers + ) -# Reset database for fresh testing -gibson forget last -gibson build datastore + if response.status_code == 201: + test_record = response.json() + print(f"Created test: {test_name}") + return test_record["id"] + else: + print(f"Failed to create test: {response.status_code}") + return None -# Generate models after schema changes -gibson code models -gibson code schemas -``` + def log_agent_action(self, test_id, action_type, input_data, output_data, duration): + """Log an agent action during testing""" + action_data = { + "test_id": test_id, + "action_type": action_type, + "input_data": input_data, + "output_data": output_data, + "timestamp": datetime.now().isoformat(), + "duration": duration + } -## Benefits for AI Agent Testing + response = requests.post( + f"{self.base_url}/agent-actions", + json=action_data, + headers=self.headers + ) -- **Rapid Setup**: Quick schema creation using natural language -- **Realistic Testing**: Create scenarios that mirror production usage -- **Flexible Data**: Easy to modify test data as needs change -- **Natural Queries**: Test agents with natural language database queries -- **Automated APIs**: Immediate access to REST APIs for data operations + if response.status_code == 201: + return 
response.json() + else: + print(f"Failed to log action: {response.status_code}") + return None -## Getting Started + def record_metric(self, test_id, metric_name, value): + """Record a performance metric""" + metric_data = { + "test_id": test_id, + "metric_name": metric_name, + "value": value, + "timestamp": datetime.now().isoformat() + } -1. **Define Schema**: Use natural language to create your database schema -2. **Generate Models**: Create Python models with Gibson CLI -3. **Create Test Data**: Populate database with sample data -4. **Test Agent Operations**: Validate agent functionality with test data -5. **Iterate and Improve**: Refine schema and data based on testing results + response = requests.post( + f"{self.base_url}/agent-metrics", + json=metric_data, + headers=self.headers + ) + + if response.status_code == 201: + return response.json() + else: + print(f"Failed to record metric: {response.status_code}") + return None + + def log_test_result(self, test_id, result_type, data, success, error_message=None): + """Log test result""" + result_data = { + "test_id": test_id, + "result_type": result_type, + "data": data, + "success": success, + "error_message": error_message + } + + response = requests.post( + f"{self.base_url}/test-results", + json=result_data, + headers=self.headers + ) + + if response.status_code == 201: + return response.json() + else: + print(f"Failed to log result: {response.status_code}") + return None +``` + +### Testing Different Agent Configurations + +```python +class AgentBehaviorTester: + def __init__(self, api_key): + self.tester = AgentTester(api_key) -Ready to create database schemas and test data for your AI agents? [Get started with GibsonAI](/get-started/signing-up). + def test_response_configurations(self): + """Test different agent response configurations""" + # Test Configuration A: Conservative responses + config_a = { + "response_style": "conservative", + "confidence_threshold": 0.8, + "escalation_enabled": True + } ---- -title: Database backends for data-driven AI agents -subtitle: Provide structured data storage and retrieval for AI agents with natural language database management -enableTableOfContents: true -updatedOn: '2025-01-08T00:00:00.000Z' ---- + test_a_id = self.tester.create_test("Conservative Response Test", config_a) -Provide structured data storage and retrieval for AI agents with natural language database management. Create and manage databases that AI agents can query and update using natural language commands. 
+ # Test Configuration B: Assertive responses + config_b = { + "response_style": "assertive", + "confidence_threshold": 0.6, + "escalation_enabled": False + } - + test_b_id = self.tester.create_test("Assertive Response Test", config_b) -MCP Integration + # Run tests with same scenarios + test_scenarios = [ + {"user_input": "I need help with my order", "expected_action": "order_lookup"}, + {"user_input": "I want to cancel my subscription", "expected_action": "cancellation_process"}, + {"user_input": "This product is defective", "expected_action": "refund_process"} + ] -Database Management + for scenario in test_scenarios: + # Test Configuration A + self.run_test_scenario(test_a_id, scenario, config_a) -CLI Tools + # Test Configuration B + self.run_test_scenario(test_b_id, scenario, config_b) - + def run_test_scenario(self, test_id, scenario, config): + """Run a single test scenario""" + start_time = time.time() -## Key Features + # Simulate agent processing + try: + # Mock agent response based on configuration + if config["response_style"] == "conservative": + response = self.generate_conservative_response(scenario["user_input"]) + else: + response = self.generate_assertive_response(scenario["user_input"]) -### Natural Language Database Management + duration = time.time() - start_time -- **Schema Creation**: Create database schemas using natural language descriptions -- **Table Management**: Add, modify, and remove tables with simple prompts -- **Relationship Building**: Define relationships between tables using natural language -- **Data Type Handling**: Automatically select appropriate data types + # Log the action + self.tester.log_agent_action( + test_id, + "user_interaction", + scenario["user_input"], + response, + duration + ) -### Text-to-SQL Capabilities + # Record metrics + self.tester.record_metric(test_id, "response_time", duration) + self.tester.record_metric(test_id, "confidence_score", response.get("confidence", 0)) -- **Natural Language Queries**: Convert natural language to SQL queries -- **Safe Query Execution**: Execute queries safely with built-in protections -- **Result Formatting**: Format query results for agent consumption -- **Multi-table Queries**: Handle complex queries across multiple tables + # Log result + success = response.get("action") == scenario["expected_action"] + self.tester.log_test_result( + test_id, + "scenario_test", + {"scenario": scenario, "response": response}, + success + ) -## Implementation Examples + except Exception as e: + # Log error + self.tester.log_test_result( + test_id, + "scenario_test", + {"scenario": scenario, "error": str(e)}, + False, + str(e) + ) -### Creating Database Schema with Natural Language + def generate_conservative_response(self, user_input): + """Generate conservative agent response""" + # Mock conservative response logic + return { + "response": "I'd be happy to help you with that. 
Let me connect you with a specialist.", + "action": "escalate", + "confidence": 0.9 + } -```python -# Using Gibson CLI to create database schema for AI agents -# gibson modify user_profiles "Create a user profile table with name, email, preferences, and created_at" -# gibson modify user_actions "Create a user actions table that tracks user_id, action_type, timestamp, and metadata" -# gibson code models # Generate SQLAlchemy models -# gibson merge # Apply changes to database + def generate_assertive_response(self, user_input): + """Generate assertive agent response""" + # Mock assertive response logic + return { + "response": "I can help you with that right away. Let me process your request.", + "action": "direct_action", + "confidence": 0.7 + } ``` -### Text-to-SQL Query Examples +### Analyzing Test Results ```python -# Using Gibson Studio or API for text-to-SQL queries -import requests +class TestResultAnalyzer: + def __init__(self, api_key): + self.api_key = api_key + self.base_url = "https://api.gibsonai.com/v1/-" + self.headers = {"Authorization": f"Bearer {api_key}"} -# Query user data with natural language -query_request = { - "query": "Show me all users who signed up in the last 30 days" -} + def compare_test_performance(self, test_a_name, test_b_name): + """Compare performance between two tests""" + query_request = { + "query": f"Compare average response time and success rate between tests named '{test_a_name}' and '{test_b_name}'" + } -response = requests.post( - "https://api.gibsonai.com/v1/-/query", - json=query_request, - headers={"Authorization": "Bearer your_api_key"} -) + response = requests.post( + f"{self.base_url}/query", + json=query_request, + headers=self.headers + ) -results = response.json() -``` + if response.status_code == 200: + results = response.json() + print("Test Performance Comparison:") + for result in results: + print(f" {result}") + return results + else: + print(f"Analysis failed: {response.status_code}") + return None -### REST API Integration + def analyze_agent_behavior_patterns(self, test_id): + """Analyze agent behavior patterns in a test""" + query_request = { + "query": f"Analyze action types and response patterns for test ID {test_id}" + } -```python -# Using auto-generated REST APIs -import requests + response = requests.post( + f"{self.base_url}/query", + json=query_request, + headers=self.headers + ) -# Get all user profiles -response = requests.get( - "https://api.gibsonai.com/v1/-/user-profiles", - headers={"Authorization": "Bearer your_api_key"} -) + if response.status_code == 200: + results = response.json() + print(f"Behavior Analysis for Test {test_id}:") + for result in results: + print(f" {result}") + return results + else: + print(f"Analysis failed: {response.status_code}") + return None -# Create new user profile -new_profile = { - "name": "John Doe", - "email": "john@example.com", - "preferences": {"theme": "dark", "notifications": true} -} + def get_error_analysis(self, test_id): + """Get error analysis for a test""" + query_request = { + "query": f"Show all errors and failure patterns for test ID {test_id}" + } -response = requests.post( - "https://api.gibsonai.com/v1/-/user-profiles", - json=new_profile, - headers={"Authorization": "Bearer your_api_key"} -) -``` + response = requests.post( + f"{self.base_url}/query", + json=query_request, + headers=self.headers + ) -## Use Cases + if response.status_code == 200: + results = response.json() + print(f"Error Analysis for Test {test_id}:") + for result in results: + print(f" 
{result}") + return results + else: + print(f"Analysis failed: {response.status_code}") + return None -### Agent Data Storage + def generate_test_report(self, test_name): + """Generate comprehensive test report""" + query_request = { + "query": f"Generate a comprehensive report for test '{test_name}' including performance metrics, success rates, and error analysis" + } -Perfect for AI agents that need to: + response = requests.post( + f"{self.base_url}/query", + json=query_request, + headers=self.headers + ) -- Store user interactions and preferences -- Maintain conversation history -- Track agent performance metrics -- Store processed data from external sources + if response.status_code == 200: + results = response.json() + print(f"Test Report for {test_name}:") + for result in results: + print(f" {result}") + return results + else: + print(f"Report generation failed: {response.status_code}") + return None +``` -### Natural Language Data Access +### A/B Testing Example -Enable agents to: +```python +class ABTestingFramework: + def __init__(self, api_key): + self.tester = AgentTester(api_key) + self.analyzer = TestResultAnalyzer(api_key) -- Query databases using natural language -- Create reports from stored data -- Filter and search data based on user requests -- Generate insights from historical data + def run_ab_test(self, test_name, config_a, config_b, scenarios): + """Run A/B test with two configurations""" + + # Create tests for both configurations + test_a_id = self.tester.create_test(f"{test_name}_A", config_a) + test_b_id = self.tester.create_test(f"{test_name}_B", config_b) -### Schema Evolution + # Run scenarios for both configurations + for scenario in scenarios: + # Test Configuration A + self.run_scenario_test(test_a_id, scenario, config_a) -Allow agents to: + # Test Configuration B + self.run_scenario_test(test_b_id, scenario, config_b) -- Adapt database structure based on new requirements -- Add new data fields as needed -- Modify existing tables without manual intervention -- Maintain data integrity during changes + # Analyze results + print(f"\nA/B Test Results for {test_name}:") + self.analyzer.compare_test_performance(f"{test_name}_A", f"{test_name}_B") -## Gibson Studio Integration + return test_a_id, test_b_id -Use Gibson Studio for: + def run_scenario_test(self, test_id, scenario, config): + """Run a single scenario test""" + start_time = time.time() -- Visual database exploration -- Query building and testing -- Data visualization -- Schema management + try: + # Simulate agent processing based on configuration + response = self.simulate_agent_response(scenario, config) + duration = time.time() - start_time -## MCP Server Integration + # Log action + self.tester.log_agent_action( + test_id, + "scenario_test", + scenario, + response, + duration + ) -Connect AI tools and agents: + # Record metrics + self.tester.record_metric(test_id, "response_time", duration) + self.tester.record_metric(test_id, "confidence_score", response.get("confidence", 0)) -- Natural language database operations -- Secure database access -- Contextual query suggestions -- Automated schema updates + # Determine success + success = response.get("error") is None -## Benefits for AI Agents + # Log result + self.tester.log_test_result( + test_id, + "ab_test_scenario", + {"scenario": scenario, "response": response}, + success, + response.get("error") + ) -- **Rapid Development**: Create databases in minutes, not hours -- **Natural Interface**: Use natural language instead of SQL -- **Automatic APIs**: 
Get REST APIs without coding -- **Schema Flexibility**: Easily modify structure as needs change -- **Safe Operations**: Built-in protections for data integrity + except Exception as e: + # Log error + self.tester.log_test_result( + test_id, + "ab_test_scenario", + {"scenario": scenario, "error": str(e)}, + False, + str(e) + ) + def simulate_agent_response(self, scenario, config): + """Simulate agent response based on configuration""" + # Mock agent response logic + if config.get("response_style") == "detailed": + return { + "response": "I'll provide detailed help with your request...", + "confidence": 0.85, + "action": "detailed_response" + } + else: + return { + "response": "I can help with that.", + "confidence": 0.75, + "action": "brief_response" + } +``` ---- -title: Database schemas for AI knowledge management systems -subtitle: Create database schemas for storing and managing knowledge data in AI applications -enableTableOfContents: true -updatedOn: '2025-01-08T00:00:00.000Z' ---- +## Benefits for AI Agent Testing -Create database schemas for storing and managing knowledge data in AI applications. Use GibsonAI's natural language database management to set up structured data storage for documents, metadata, and knowledge bases. +### Comprehensive Testing -## How it works +- **Isolated Environments**: Test different configurations without interference +- **Structured Data**: Organized test data for easy analysis +- **Natural Language Analysis**: Query test results using natural language +- **Performance Tracking**: Track agent performance over time -GibsonAI provides database schema creation capabilities for AI knowledge management systems. While it doesn't provide vector storage or embeddings, it can create structured databases to store document metadata, knowledge base information, and other structured data that supports AI applications. 
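+For example, querying logged test results in natural language is a single API call. A minimal sketch, using the `/v1/-/query` endpoint and bearer auth from the examples above (the question text is illustrative):
+
+```python
+import requests
+
+# Ask a natural language question about logged test results.
+# The query text is illustrative; adapt it to your test schema.
+response = requests.post(
+    "https://api.gibsonai.com/v1/-/query",
+    json={"query": "Compare success rates across my last two test runs"},
+    headers={"Authorization": "Bearer your_api_key"}
+)
+
+if response.status_code == 200:
+    for row in response.json():
+        print(row)
+```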
+### Data-Driven Insights - +- **Behavior Analysis**: Analyze agent behavior patterns and trends +- **Performance Comparison**: Compare different agent configurations +- **Error Identification**: Identify and analyze error patterns +- **Optimization Guidance**: Data-driven insights for agent improvement -MCP Server Integration +### Scalable Testing -Database Management +- **Multiple Environments**: Test multiple configurations simultaneously +- **Flexible Schema**: Adapt database schema to different testing needs +- **API Integration**: Easy integration with existing testing workflows +- **Automated Analysis**: Automated analysis and reporting capabilities -Connect Your App +## Best Practices - +### Test Design -## Key Features +- **Clear Objectives**: Define clear testing objectives and success criteria +- **Realistic Scenarios**: Use realistic test scenarios that match production usage +- **Controlled Variables**: Control variables to isolate the impact of changes +- **Comprehensive Coverage**: Test edge cases and error scenarios -### Document Metadata Storage +### Data Management -- **Document Information**: Store document titles, descriptions, and metadata -- **Content Organization**: Organize documents by categories and topics -- **Version Tracking**: Track document versions and updates -- **Access Control**: Manage document access and permissions +- **Consistent Logging**: Log all relevant data consistently across tests +- **Data Quality**: Ensure high-quality test data for accurate analysis +- **Version Control**: Track changes to test configurations and scenarios +- **Data Retention**: Implement appropriate data retention policies -### Knowledge Base Management +### Analysis and Reporting -- **Structured Data**: Store knowledge base information in structured format -- **Relationships**: Define relationships between knowledge entities -- **Search Metadata**: Store metadata to support search functionality -- **Content References**: Maintain references to external content +- **Regular Analysis**: Regularly analyze test results for insights +- **Comparative Analysis**: Compare results across different configurations +- **Trend Analysis**: Track performance trends over time +- **Actionable Insights**: Focus on actionable insights for improvement -### AI Application Support +## Getting Started -- **Natural Language Queries**: Query knowledge data using natural language -- **REST APIs**: Auto-generated APIs for knowledge data access -- **Integration Support**: Easy integration with AI applications -- **Flexible Schema**: Adapt schema to different knowledge management needs +1. **Design Test Schema**: Define your agent testing database schema +2. **Create Test Environment**: Set up isolated database for testing +3. **Implement Testing Framework**: Create framework for logging test data +4. **Run Tests**: Execute tests with different agent configurations +5. 
**Analyze Results**: Use natural language queries to analyze results -## Implementation Examples +## Gibson CLI Commands -### Creating Knowledge Base Schema +```bash +# Create agent testing schema +gibson modify table_name "description of testing table" +gibson code models +gibson merge -```python -# Using Gibson CLI to create knowledge base schema -# Create document management tables -# gibson modify documents "Create documents table with id, title, description, content_type, file_path, category, created_at, updated_at" -# gibson modify document_metadata "Create document_metadata table with id, document_id, metadata_key, metadata_value" -# gibson modify categories "Create categories table with id, name, description, parent_id" -# gibson modify tags "Create tags table with id, name, description, color" -# gibson modify document_tags "Create document_tags table with id, document_id, tag_id" +# Generate models for testing integration +gibson code models +gibson code schemas -# Generate models and deploy -# gibson code models -# gibson merge +# Reset testing environment +gibson forget last +gibson build datastore ``` -### Document Management System - -```python -import requests -from datetime import datetime - -class DocumentManager: - def __init__(self, api_key): - self.api_key = api_key - self.base_url = "https://api.gibsonai.com/v1/-" - self.headers = {"Authorization": f"Bearer {api_key}"} +Ready to set up database environments for testing AI agent behavior? [Get started with GibsonAI](/get-started/signing-up). - def create_document(self, title, description, content_type, file_path, category_id): - """Create a new document record""" - document_data = { - "title": title, - "description": description, - "content_type": content_type, - "file_path": file_path, - "category": category_id, - "created_at": datetime.now().isoformat(), - "updated_at": datetime.now().isoformat() - } - response = requests.post( - f"{self.base_url}/documents", - json=document_data, - headers=self.headers - ) +--- +title: Database schemas and sample data for AI agent testing +subtitle: Create database schemas and populate with sample data for AI agent development and testing +enableTableOfContents: true +updatedOn: '2025-01-08T00:00:00.000Z' +--- - if response.status_code == 201: - document = response.json() - print(f"Document created: {title}") - return document - else: - print(f"Failed to create document: {response.status_code}") - return None +Create database schemas and populate with sample data for AI agent development and testing. Use GibsonAI's natural language database management to quickly set up test environments with realistic data structures. 
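+At a glance, the workflow is: describe the schema with the Gibson CLI, then seed it through the auto-generated REST APIs. A condensed sketch of the fuller examples below (table and field names are illustrative):
+
+```python
+# Schema creation happens once, via the Gibson CLI:
+#   gibson modify users "Create users table with id, username, email, status"
+#   gibson code models
+#   gibson merge
+# Sample rows are then seeded through the auto-generated REST API:
+import requests
+
+response = requests.post(
+    "https://api.gibsonai.com/v1/-/users",
+    json={"username": "test_user", "email": "test@example.com", "status": "active"},
+    headers={"Authorization": "Bearer your_api_key"}
+)
+print(response.status_code)  # 201 indicates the record was created
+```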
- def add_document_metadata(self, document_id, metadata_key, metadata_value): - """Add metadata to a document""" - metadata_data = { - "document_id": document_id, - "metadata_key": metadata_key, - "metadata_value": metadata_value - } + - response = requests.post( - f"{self.base_url}/document-metadata", - json=metadata_data, - headers=self.headers - ) +MCP Integration - if response.status_code == 201: - print(f"Metadata added: {metadata_key} = {metadata_value}") - return response.json() - else: - print(f"Failed to add metadata: {response.status_code}") - return None +Database Management - def tag_document(self, document_id, tag_id): - """Tag a document""" - tag_data = { - "document_id": document_id, - "tag_id": tag_id - } +CLI Tools - response = requests.post( - f"{self.base_url}/document-tags", - json=tag_data, - headers=self.headers - ) + - if response.status_code == 201: - print(f"Document tagged successfully") - return response.json() - else: - print(f"Failed to tag document: {response.status_code}") - return None +## Key Features - def search_documents(self, query): - """Search documents using natural language""" - search_request = { - "query": query - } +### Natural Language Schema Creation - response = requests.post( - f"{self.base_url}/query", - json=search_request, - headers=self.headers - ) +- **Table Definition**: Create tables using natural language descriptions +- **Relationship Building**: Define relationships between tables naturally +- **Data Type Selection**: Automatically choose appropriate data types +- **Index Creation**: Add indexes for performance optimization - if response.status_code == 200: - results = response.json() - print(f"Found {len(results)} documents") - return results - else: - print(f"Search failed: {response.status_code}") - return None -``` +### Sample Data Creation -### Knowledge Base Schema for AI Applications +- **Manual Data Entry**: Create sample records through REST APIs +- **Bulk Operations**: Insert multiple records at once +- **Realistic Scenarios**: Set up data that mirrors real-world use cases +- **Agent Testing**: Create data specifically for agent testing scenarios + +## Implementation Examples + +### Creating Test Database Schema ```python -# Create comprehensive knowledge base schema -# gibson modify knowledge_items "Create knowledge_items table with id, title, content_summary, source_type, source_reference, topic, created_at" -# gibson modify knowledge_relationships "Create knowledge_relationships table with id, source_item_id, target_item_id, relationship_type, strength" -# gibson modify topics "Create topics table with id, name, description, parent_topic_id" -# gibson modify content_sources "Create content_sources table with id, name, type, url, last_updated, status" -# gibson modify search_metadata "Create search_metadata table with id, knowledge_item_id, keywords, summary, relevance_score" +# Using Gibson CLI to create schema for agent testing +# Create user management system +# gibson modify users "Create users table with id, username, email, created_at, status" +# gibson modify user_profiles "Create user_profiles table with user_id, first_name, last_name, bio, avatar_url" +# gibson modify conversations "Create conversations table with id, user_id, agent_id, message, response, timestamp" +# gibson modify agent_metrics "Create agent_metrics table with agent_id, metric_name, value, recorded_at" -# Generate models and deploy +# Generate Python models # gibson code models # gibson merge ``` -### AI Application Integration +### Populating 
with Sample Data ```python -class AIKnowledgeManager: +import requests +import random +from datetime import datetime, timedelta + +class TestDataGenerator: def __init__(self, api_key): self.api_key = api_key self.base_url = "https://api.gibsonai.com/v1/-" self.headers = {"Authorization": f"Bearer {api_key}"} - def store_knowledge_item(self, title, content_summary, source_type, source_reference, topic): - """Store a knowledge item""" - knowledge_data = { - "title": title, - "content_summary": content_summary, - "source_type": source_type, - "source_reference": source_reference, - "topic": topic, - "created_at": datetime.now().isoformat() - } + def create_sample_users(self): + """Create sample users for testing""" + sample_users = [ + {"username": "alice_smith", "email": "alice@example.com", "status": "active"}, + {"username": "bob_jones", "email": "bob@example.com", "status": "active"}, + {"username": "charlie_brown", "email": "charlie@example.com", "status": "inactive"}, + {"username": "diana_prince", "email": "diana@example.com", "status": "active"}, + {"username": "eve_wilson", "email": "eve@example.com", "status": "active"} + ] - response = requests.post( - f"{self.base_url}/knowledge-items", - json=knowledge_data, - headers=self.headers - ) + created_users = [] + for user in sample_users: + response = requests.post( + f"{self.base_url}/users", + json=user, + headers=self.headers + ) + if response.status_code == 201: + created_users.append(response.json()) + print(f"Created user: {user['username']}") - if response.status_code == 201: - knowledge_item = response.json() - print(f"Knowledge item stored: {title}") - return knowledge_item - else: - print(f"Failed to store knowledge item: {response.status_code}") - return None + return created_users - def create_knowledge_relationship(self, source_item_id, target_item_id, relationship_type, strength=1.0): - """Create relationship between knowledge items""" - relationship_data = { - "source_item_id": source_item_id, - "target_item_id": target_item_id, - "relationship_type": relationship_type, - "strength": strength - } + def create_sample_profiles(self, users): + """Create sample user profiles""" + profiles = [ + {"first_name": "Alice", "last_name": "Smith", "bio": "Software engineer"}, + {"first_name": "Bob", "last_name": "Jones", "bio": "Product manager"}, + {"first_name": "Charlie", "last_name": "Brown", "bio": "Designer"}, + {"first_name": "Diana", "last_name": "Prince", "bio": "Data scientist"}, + {"first_name": "Eve", "last_name": "Wilson", "bio": "Marketing specialist"} + ] - response = requests.post( - f"{self.base_url}/knowledge-relationships", - json=relationship_data, - headers=self.headers - ) + for i, user in enumerate(users): + if i < len(profiles): + profile_data = profiles[i] + profile_data["user_id"] = user["id"] - if response.status_code == 201: - print(f"Relationship created: {relationship_type}") - return response.json() - else: - print(f"Failed to create relationship: {response.status_code}") - return None + response = requests.post( + f"{self.base_url}/user-profiles", + json=profile_data, + headers=self.headers + ) + if response.status_code == 201: + print(f"Created profile for: {user['username']}") - def add_search_metadata(self, knowledge_item_id, keywords, summary, relevance_score): - """Add search metadata to knowledge item""" - metadata_data = { - "knowledge_item_id": knowledge_item_id, - "keywords": keywords, - "summary": summary, - "relevance_score": relevance_score - } + def create_sample_conversations(self, 
users): + """Create sample conversations for agent testing""" + conversation_templates = [ + { + "message": "Hello, I need help with my account", + "response": "I'd be happy to help with your account. What specifically do you need assistance with?" + }, + { + "message": "How do I reset my password?", + "response": "To reset your password, click on 'Forgot Password' on the login page and follow the instructions." + }, + { + "message": "I'm having trouble with payments", + "response": "I understand payment issues can be frustrating. Let me help you resolve this." + }, + { + "message": "Can you tell me about your features?", + "response": "I'd be happy to explain our features. We offer database management, natural language queries, and more." + } + ] - response = requests.post( - f"{self.base_url}/search-metadata", - json=metadata_data, - headers=self.headers - ) + for user in users: + for template in conversation_templates: + conversation = { + "user_id": user["id"], + "agent_id": "agent_001", + "message": template["message"], + "response": template["response"], + "timestamp": datetime.now().isoformat() + } - if response.status_code == 201: - print("Search metadata added") - return response.json() - else: - print(f"Failed to add search metadata: {response.status_code}") - return None + response = requests.post( + f"{self.base_url}/conversations", + json=conversation, + headers=self.headers + ) + if response.status_code == 201: + print(f"Created conversation for: {user['username']}") - def query_knowledge_base(self, query): - """Query knowledge base using natural language""" - query_request = { - "query": query - } + def create_sample_metrics(self): + """Create sample agent metrics""" + metrics = [ + {"agent_id": "agent_001", "metric_name": "response_time", "value": 0.5}, + {"agent_id": "agent_001", "metric_name": "satisfaction_score", "value": 4.5}, + {"agent_id": "agent_001", "metric_name": "conversations_handled", "value": 150}, + {"agent_id": "agent_002", "metric_name": "response_time", "value": 0.7}, + {"agent_id": "agent_002", "metric_name": "satisfaction_score", "value": 4.2}, + {"agent_id": "agent_002", "metric_name": "conversations_handled", "value": 120} + ] - response = requests.post( - f"{self.base_url}/query", - json=query_request, - headers=self.headers - ) + for metric in metrics: + metric["recorded_at"] = datetime.now().isoformat() - if response.status_code == 200: - results = response.json() - print(f"Found {len(results)} knowledge items") - return results - else: - print(f"Query failed: {response.status_code}") - return None -``` + response = requests.post( + f"{self.base_url}/agent-metrics", + json=metric, + headers=self.headers + ) + if response.status_code == 201: + print(f"Created metric: {metric['metric_name']} for {metric['agent_id']}") -### Content Management Schema + def generate_all_test_data(self): + """Generate complete test dataset""" + print("Creating test data...") -```python -# Create content management schema for AI applications -# gibson modify content_items "Create content_items table with id, title, content_type, content_length, language, created_at, updated_at" -# gibson modify content_sections "Create content_sections table with id, content_item_id, section_title, section_order, content_preview" -# gibson modify content_references "Create content_references table with id, content_item_id, reference_type, reference_target, reference_context" -# gibson modify content_analytics "Create content_analytics table with id, content_item_id, view_count, 
search_count, relevance_score, last_accessed" + # Create users + users = self.create_sample_users() + + # Create profiles + self.create_sample_profiles(users) + + # Create conversations + self.create_sample_conversations(users) + + # Create metrics + self.create_sample_metrics() -# Generate models and deploy -# gibson code models -# gibson merge + print("Test data creation completed!") + return users ``` -### Query and Analytics +### Using Test Data in Agent Development ```python -class KnowledgeAnalytics: +class AgentTester: def __init__(self, api_key): self.api_key = api_key self.base_url = "https://api.gibsonai.com/v1/-" self.headers = {"Authorization": f"Bearer {api_key}"} - def get_popular_topics(self): - """Get most popular topics""" - query_request = { - "query": "Show the most popular topics based on knowledge item count and search frequency" - } - - response = requests.post( - f"{self.base_url}/query", - json=query_request, - headers=self.headers - ) - - if response.status_code == 200: - results = response.json() - print("Popular topics:") - for result in results: - print(f" {result}") - return results - else: - print(f"Query failed: {response.status_code}") - return None - - def analyze_content_performance(self): - """Analyze content performance""" + def test_user_lookup(self): + """Test agent's ability to look up users""" query_request = { - "query": "Show content performance metrics including view counts, search frequency, and relevance scores" + "query": "Show me all active users" } response = requests.post( @@ -7799,18 +7934,16 @@ class KnowledgeAnalytics: if response.status_code == 200: results = response.json() - print("Content performance analysis:") - for result in results: - print(f" {result}") + print(f"Found {len(results)} active users") return results else: print(f"Query failed: {response.status_code}") return None - def find_related_content(self, knowledge_item_id): - """Find related content based on relationships""" + def test_conversation_analysis(self): + """Test agent's ability to analyze conversations""" query_request = { - "query": f"Find all content items related to knowledge item {knowledge_item_id} through relationships" + "query": "Show me all conversations from the last 24 hours with user satisfaction scores" } response = requests.post( @@ -7821,16 +7954,16 @@ class KnowledgeAnalytics: if response.status_code == 200: results = response.json() - print(f"Found {len(results)} related content items") + print(f"Found {len(results)} recent conversations") return results else: print(f"Query failed: {response.status_code}") return None - def get_content_gaps(self): - """Identify content gaps in knowledge base""" + def test_metrics_reporting(self): + """Test agent's ability to generate metrics reports""" query_request = { - "query": "Identify topics with low content coverage or missing relationships" + "query": "Calculate average response time and satisfaction score by agent" } response = requests.post( @@ -7841,988 +7974,931 @@ class KnowledgeAnalytics: if response.status_code == 200: results = response.json() - print("Content gaps identified:") - for result in results: - print(f" {result}") + print("Agent metrics report generated") return results else: print(f"Query failed: {response.status_code}") return None ``` -## Use Cases - -### Document Management - -Perfect for: - -- Storing document metadata and information -- Organizing documents by categories and topics -- Tracking document versions and changes -- Managing document access and permissions - -### 
Knowledge Base Systems - -Enable: - -- Structured storage of knowledge information -- Relationship mapping between knowledge items -- Search metadata for improved discoverability -- Content analytics and performance tracking - -### AI Application Support - -Support: - -- Data storage for AI-powered applications -- Structured metadata for AI processing -- Integration with external AI services -- Analytics and reporting capabilities - -### Content Analytics - -Allow: - -- Tracking content performance and usage -- Identifying popular topics and trends -- Finding content gaps and opportunities -- Measuring content effectiveness - -## Benefits for AI Applications - -### Structured Data Management - -- **Organized Storage**: Well-structured storage for knowledge data -- **Relationship Mapping**: Define relationships between data entities -- **Flexible Schema**: Adapt to different knowledge management needs -- **Natural Language Queries**: Query data using natural language - -### Easy Integration - -- **REST APIs**: Auto-generated APIs for easy integration -- **MCP Server**: Connect AI tools through MCP server -- **Python Models**: Generated models for easy development -- **Documentation**: Comprehensive API documentation - -### Scalable Architecture - -- **Performance**: Optimized for knowledge management workloads -- **Flexibility**: Easy to modify schema as needs evolve -- **Analytics**: Built-in analytics and reporting capabilities -- **Search Support**: Metadata storage to support search functionality - -## Important Limitations - -### What GibsonAI Does NOT Provide - -- **Vector Storage**: No built-in vector database or embeddings storage -- **Similarity Search**: No semantic similarity search capabilities -- **Text Processing**: No automatic text processing or chunking -- **Embedding Generation**: No automatic embedding generation - -### External Integration Required - -For complete RAG implementations, you'll need: - -- **Vector Database**: Use external vector databases like Pinecone, Weaviate, or Chroma -- **Embedding Services**: Use OpenAI, Hugging Face, or other embedding services -- **Text Processing**: Implement text chunking and processing separately -- **Search Logic**: Implement semantic search logic in your application - -## Best Practices - -### Schema Design - -- **Clear Structure**: Design clear, logical database structure -- **Appropriate Relationships**: Define meaningful relationships between entities -- **Metadata Storage**: Store relevant metadata for search and analytics -- **Performance Optimization**: Consider query performance in schema design - -### Data Management - -- **Quality Control**: Maintain high-quality data for better AI performance -- **Consistent Formats**: Use consistent data formats across the system -- **Regular Updates**: Keep data current and relevant -- **Backup Strategy**: Implement proper backup and recovery procedures - -### Integration Strategy - -- **External Services**: Plan integration with external AI services -- **API Design**: Design APIs that work well with AI applications -- **Error Handling**: Implement robust error handling -- **Performance Monitoring**: Monitor system performance and usage - -## Getting Started - -1. **Design Schema**: Plan your knowledge management database structure -2. **Create Database**: Use Gibson CLI to create your database schema -3. **Generate Models**: Create Python models for integration -4. **Test Integration**: Test with sample data and queries -5. 
**Deploy**: Deploy your knowledge management system - -## Gibson CLI Commands - -```bash -# Create knowledge management schema -gibson modify table_name "description of knowledge table" -gibson code models -gibson merge - -# Generate models for integration -gibson code models -gibson code schemas -``` - -## Sample Schema Examples - -### Basic Document Management - -```python -# Simple document management schema -document_tables = { - "documents": "Create documents table with id, title, content_type, file_path, created_at", - "categories": "Create categories table with id, name, description", - "tags": "Create tags table with id, name, color", - "document_tags": "Create document_tags table with id, document_id, tag_id" -} -``` - -### Knowledge Base with Relationships +### E-commerce Agent Test Data ```python -# Knowledge base with relationships -knowledge_tables = { - "knowledge_items": "Create knowledge_items table with id, title, summary, topic, created_at", - "relationships": "Create relationships table with id, source_id, target_id, type, strength", - "topics": "Create topics table with id, name, description, parent_id", - "search_metadata": "Create search_metadata table with id, item_id, keywords, relevance_score" -} -``` - -Ready to create database schemas for your AI knowledge management system? [Get started with GibsonAI](/get-started/signing-up). - - ---- -title: AI-Driven Schema and Model Generation -subtitle: Generate database schemas and Pydantic/SQLAlchemy models with AI-powered natural language prompts -enableTableOfContents: true -updatedOn: '2025-01-08T00:00:00.000Z' ---- - -Generate database schemas and Pydantic/SQLAlchemy models with AI-powered natural language prompts. Keep your code and database in sync with automated model generation and schema management for seamless Python integration. - -## How it works - -When you describe your database needs in natural language, GibsonAI generates both the MySQL database schema and corresponding Pydantic schemas and SQLAlchemy models. Your code and database stay synchronized through automatic model generation and schema management. 
- - - -AI-powered generation - -Python integration - -Auto-sync - -Text-to-SQL +def create_ecommerce_test_data(): + """Create test data for e-commerce agent""" - + # Create schema + # gibson modify products "Create products table with id, name, description, price, category, stock_quantity" + # gibson modify orders "Create orders table with id, customer_id, total, status, created_at" + # gibson modify order_items "Create order_items table with id, order_id, product_id, quantity, price" + # gibson modify customers "Create customers table with id, name, email, phone, address" + # gibson code models + # gibson merge -## Key Features + # Sample products + products = [ + {"name": "Laptop", "description": "High-performance laptop", "price": 999.99, "category": "Electronics", "stock_quantity": 50}, + {"name": "Mouse", "description": "Wireless mouse", "price": 29.99, "category": "Electronics", "stock_quantity": 200}, + {"name": "Coffee Mug", "description": "Ceramic coffee mug", "price": 12.99, "category": "Kitchen", "stock_quantity": 100}, + {"name": "Notebook", "description": "Spiral notebook", "price": 3.99, "category": "Office", "stock_quantity": 300} + ] -### Natural Language Schema Generation + # Sample customers + customers = [ + {"name": "John Doe", "email": "john@example.com", "phone": "555-0123", "address": "123 Main St"}, + {"name": "Jane Smith", "email": "jane@example.com", "phone": "555-0456", "address": "456 Oak Ave"} + ] -- **Plain English Prompts**: Describe your database needs in simple language -- **Context Understanding**: AI understands relationships and constraints -- **Optimization Suggestions**: Get suggestions for better performance -- **Iterative Refinement**: Refine schemas through conversation + # Sample orders + orders = [ + {"customer_id": 1, "total": 1029.98, "status": "completed"}, + {"customer_id": 2, "total": 16.98, "status": "pending"} + ] -### Automatic Python Model Generation + return {"products": products, "customers": customers, "orders": orders} +``` -- **Pydantic Schemas**: Type-safe validation schemas for all tables -- **SQLAlchemy Models**: Complete ORM models with relationships -- **Code Synchronization**: Models automatically updated with schema changes -- **Python Integration**: Ready-to-use code for immediate integration +## Testing Scenarios -### Text-to-SQL Analysis +### Customer Service Agent Testing -- **Natural Language Queries**: Ask questions about your data -- **Gibson Studio**: Run generated SQL queries in the intuitive data management UI -- **Schema Analysis**: Analyze table relationships and data patterns -- **Performance Insights**: Get recommendations for optimization +```python +def test_customer_service_scenarios(): + """Test customer service agent scenarios""" -## Step-by-step guide + # Test user information lookup + query_request = { + "query": "Find customer information for user ID 123" + } -### 1. Generate schema with natural language + # Test order history + query_request = { + "query": "Show order history for customer john@example.com" + } -```bash -# Create a comprehensive schema -gibson modify "Create an e-commerce system with users, products, orders, and payments. Users can have multiple orders, each order contains multiple products, and each order has one payment record" + # Test support ticket creation + ticket_data = { + "customer_id": 123, + "issue": "Password reset request", + "priority": "medium", + "status": "open" + } ``` -### 2. 
Generate Python models +### Analytics Agent Testing -```bash -# Generate Pydantic schemas for validation -gibson code schemas +```python +def test_analytics_scenarios(): + """Test analytics agent scenarios""" -# Generate SQLAlchemy models for ORM -gibson code models + # Test user engagement metrics + query_request = { + "query": "Calculate user engagement metrics for the last 30 days" + } -# Generate all Python code -gibson code base + # Test conversation analysis + query_request = { + "query": "Analyze conversation sentiment and response times" + } + + # Test performance trends + query_request = { + "query": "Show agent performance trends over the last week" + } ``` -### 3. Explore with text-to-SQL +## Use Cases -Use Gibson Studio to analyze your generated schema: +### Agent Development -- "Show me the relationship between users and orders" -- "Which tables have foreign key constraints?" -- "What's the structure of the products table?" -- "Find any tables without primary keys" +Perfect for: -### 4. Access your data +- Creating realistic test environments +- Testing agent responses to various scenarios +- Validating agent performance with sample data +- Debugging agent database interactions -Integration options: +### Schema Validation -- **RESTful APIs**: Base URL `https://api.gibsonai.com` - - SQL queries: `/v1/-/query` - - Table operations: `/v1/-/[table-name-in-kebab-case]` -- **OpenAPI Spec**: Available in your project settings -- **Direct Connection**: Connection string available in the UI -- **API Documentation**: Available in the data API section +Enable: -## Example schema generation +- Testing database schema with realistic data +- Validating data relationships and constraints +- Ensuring data integrity across operations +- Testing schema evolution with existing data -### Natural language prompt +### Performance Testing -``` -"Create a blog system with users, posts, and comments. Users can write multiple posts, and each post can have multiple comments. Include timestamps, user roles, and post categories." 
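+A performance check can start as simply as timing repeated queries against the seeded data. A minimal sketch (the query text and iteration count are illustrative, not a full load-testing setup):
+
+```python
+import time
+import requests
+
+headers = {"Authorization": "Bearer your_api_key"}
+latencies = []
+
+# Time repeated natural language queries against the seeded test data
+for _ in range(20):
+    start = time.time()
+    requests.post(
+        "https://api.gibsonai.com/v1/-/query",
+        json={"query": "Show me all active users"},
+        headers=headers,
+    )
+    latencies.append(time.time() - start)
+
+print(f"avg: {sum(latencies) / len(latencies):.3f}s, max: {max(latencies):.3f}s")
+```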
-``` +Support: -### Generated MySQL schema +- Load testing with sample datasets +- Query performance validation +- Database optimization testing +- Scalability testing with realistic data volumes -```sql -CREATE TABLE users ( - id INT AUTO_INCREMENT PRIMARY KEY, - username VARCHAR(50) UNIQUE NOT NULL, - email VARCHAR(255) UNIQUE NOT NULL, - password_hash VARCHAR(255) NOT NULL, - role ENUM('user', 'admin') DEFAULT 'user', - created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP, - updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP -); +## Best Practices -CREATE TABLE categories ( - id INT AUTO_INCREMENT PRIMARY KEY, - name VARCHAR(100) UNIQUE NOT NULL, - description TEXT, - created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP -); +### Test Data Design -CREATE TABLE posts ( - id INT AUTO_INCREMENT PRIMARY KEY, - title VARCHAR(255) NOT NULL, - content TEXT NOT NULL, - user_id INT NOT NULL, - category_id INT NOT NULL, - published BOOLEAN DEFAULT FALSE, - created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP, - updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP, - FOREIGN KEY (user_id) REFERENCES users(id), - FOREIGN KEY (category_id) REFERENCES categories(id) -); +- **Realistic Data**: Create data that mimics real-world scenarios +- **Comprehensive Coverage**: Include edge cases and boundary conditions +- **Data Relationships**: Ensure proper relationships between tables +- **Data Variety**: Include different data types and formats -CREATE TABLE comments ( - id INT AUTO_INCREMENT PRIMARY KEY, - content TEXT NOT NULL, - user_id INT NOT NULL, - post_id INT NOT NULL, - created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP, - FOREIGN KEY (user_id) REFERENCES users(id), - FOREIGN KEY (post_id) REFERENCES posts(id) ON DELETE CASCADE -); -``` +### Testing Strategy -### Generated Pydantic schemas +- **Incremental Testing**: Start with simple scenarios and build complexity +- **Automated Testing**: Create scripts for repeatable test data generation +- **Data Cleanup**: Clean up test data after testing +- **Version Control**: Track test data changes with schema evolution -```python -from pydantic import BaseModel, EmailStr -from datetime import datetime -from typing import Optional -from enum import Enum +### Data Management -class UserRole(str, Enum): - USER = "user" - ADMIN = "admin" +- **Separation**: Keep test data separate from production data +- **Documentation**: Document test data scenarios and purposes +- **Maintenance**: Regularly update test data to reflect schema changes +- **Security**: Ensure test data doesn't contain sensitive information -class User(BaseModel): - id: Optional[int] = None - username: str - email: EmailStr - password_hash: str - role: UserRole = UserRole.USER - created_at: Optional[datetime] = None - updated_at: Optional[datetime] = None +## Gibson CLI Commands -class Category(BaseModel): - id: Optional[int] = None - name: str - description: Optional[str] = None - created_at: Optional[datetime] = None +```bash +# Create database schema +gibson modify table_name "description of table structure" +gibson code models +gibson merge -class Post(BaseModel): - id: Optional[int] = None - title: str - content: str - user_id: int - category_id: int - published: bool = False - created_at: Optional[datetime] = None - updated_at: Optional[datetime] = None +# Reset database for fresh testing +gibson forget last +gibson build datastore -class Comment(BaseModel): - id: Optional[int] = None - content: str - user_id: int - post_id: int - created_at: Optional[datetime] = 
None +# Generate models after schema changes +gibson code models +gibson code schemas ``` -### Generated SQLAlchemy models +## Benefits for AI Agent Testing -```python -from sqlalchemy import Column, Integer, String, Text, Boolean, DateTime, ForeignKey, Enum -from sqlalchemy.ext.declarative import declarative_base -from sqlalchemy.orm import relationship -from datetime import datetime -import enum +- **Rapid Setup**: Quick schema creation using natural language +- **Realistic Testing**: Create scenarios that mirror production usage +- **Flexible Data**: Easy to modify test data as needs change +- **Natural Queries**: Test agents with natural language database queries +- **Automated APIs**: Immediate access to REST APIs for data operations -Base = declarative_base() +## Getting Started -class UserRole(enum.Enum): - USER = "user" - ADMIN = "admin" +1. **Define Schema**: Use natural language to create your database schema +2. **Generate Models**: Create Python models with Gibson CLI +3. **Create Test Data**: Populate database with sample data +4. **Test Agent Operations**: Validate agent functionality with test data +5. **Iterate and Improve**: Refine schema and data based on testing results -class User(Base): - __tablename__ = 'users' +Ready to create database schemas and test data for your AI agents? [Get started with GibsonAI](/get-started/signing-up). - id = Column(Integer, primary_key=True, autoincrement=True) - username = Column(String(50), unique=True, nullable=False) - email = Column(String(255), unique=True, nullable=False) - password_hash = Column(String(255), nullable=False) - role = Column(Enum(UserRole), default=UserRole.USER) - created_at = Column(DateTime, default=datetime.utcnow) - updated_at = Column(DateTime, default=datetime.utcnow, onupdate=datetime.utcnow) - posts = relationship("Post", back_populates="user") - comments = relationship("Comment", back_populates="user") +--- +title: Database backends for data-driven AI agents +subtitle: Provide structured data storage and retrieval for AI agents with natural language database management +enableTableOfContents: true +updatedOn: '2025-01-08T00:00:00.000Z' +--- + +Provide structured data storage and retrieval for AI agents with natural language database management. Create and manage databases that AI agents can query and update using natural language commands. 
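+In practice an agent needs two kinds of calls: a natural language read and a structured write. A minimal sketch using the endpoints shown in the examples below (field values are illustrative):
+
+```python
+import requests
+
+headers = {"Authorization": "Bearer your_api_key"}
+
+# Read: natural language in, rows out
+rows = requests.post(
+    "https://api.gibsonai.com/v1/-/query",
+    json={"query": "Show me all users who signed up in the last 30 days"},
+    headers=headers,
+).json()
+
+# Write: a structured record via the auto-generated REST API
+requests.post(
+    "https://api.gibsonai.com/v1/-/user-profiles",
+    json={"name": "John Doe", "email": "john@example.com"},
+    headers=headers,
+)
+```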
+ + -class Category(Base): - __tablename__ = 'categories' +MCP Integration - id = Column(Integer, primary_key=True, autoincrement=True) - name = Column(String(100), unique=True, nullable=False) - description = Column(Text) - created_at = Column(DateTime, default=datetime.utcnow) +Database Management - posts = relationship("Post", back_populates="category") +CLI Tools -class Post(Base): - __tablename__ = 'posts' + - id = Column(Integer, primary_key=True, autoincrement=True) - title = Column(String(255), nullable=False) - content = Column(Text, nullable=False) - user_id = Column(Integer, ForeignKey('users.id'), nullable=False) - category_id = Column(Integer, ForeignKey('categories.id'), nullable=False) - published = Column(Boolean, default=False) - created_at = Column(DateTime, default=datetime.utcnow) - updated_at = Column(DateTime, default=datetime.utcnow, onupdate=datetime.utcnow) +## Key Features - user = relationship("User", back_populates="posts") - category = relationship("Category", back_populates="posts") - comments = relationship("Comment", back_populates="post", cascade="all, delete-orphan") +### Natural Language Database Management -class Comment(Base): - __tablename__ = 'comments' +- **Schema Creation**: Create database schemas using natural language descriptions +- **Table Management**: Add, modify, and remove tables with simple prompts +- **Relationship Building**: Define relationships between tables using natural language +- **Data Type Handling**: Automatically select appropriate data types - id = Column(Integer, primary_key=True, autoincrement=True) - content = Column(Text, nullable=False) - user_id = Column(Integer, ForeignKey('users.id'), nullable=False) - post_id = Column(Integer, ForeignKey('posts.id'), nullable=False) - created_at = Column(DateTime, default=datetime.utcnow) +### Text-to-SQL Capabilities - user = relationship("User", back_populates="comments") - post = relationship("Post", back_populates="comments") -``` +- **Natural Language Queries**: Convert natural language to SQL queries +- **Safe Query Execution**: Execute queries safely with built-in protections +- **Result Formatting**: Format query results for agent consumption +- **Multi-table Queries**: Handle complex queries across multiple tables -## Integration examples +## Implementation Examples -### Using Pydantic for validation +### Creating Database Schema with Natural Language ```python -from pydantic import ValidationError - -# Validate user input -try: - user_data = { - "username": "john_doe", - "email": "john@example.com", - "password_hash": "hashed_password", - "role": "admin" - } - user = User(**user_data) - print(f"Valid user: {user.username}") -except ValidationError as e: - print(f"Validation error: {e}") +# Using Gibson CLI to create database schema for AI agents +# gibson modify user_profiles "Create a user profile table with name, email, preferences, and created_at" +# gibson modify user_actions "Create a user actions table that tracks user_id, action_type, timestamp, and metadata" +# gibson code models # Generate SQLAlchemy models +# gibson merge # Apply changes to database ``` -### Using SQLAlchemy for database operations +### Text-to-SQL Query Examples ```python -from sqlalchemy import create_engine -from sqlalchemy.orm import sessionmaker +# Using Gibson Studio or API for text-to-SQL queries +import requests -# Use connection string from GibsonAI UI -engine = create_engine("your-connection-string-from-ui") -Session = sessionmaker(bind=engine) -session = Session() +# Query user data 
with natural language +query_request = { + "query": "Show me all users who signed up in the last 30 days" +} -# Create new user -new_user = User( - username="jane_doe", - email="jane@example.com", - password_hash="hashed_password", - role=UserRole.USER +response = requests.post( + "https://api.gibsonai.com/v1/-/query", + json=query_request, + headers={"Authorization": "Bearer your_api_key"} ) -session.add(new_user) -session.commit() -# Query with relationships -posts_with_authors = session.query(Post).join(User).filter( - User.role == UserRole.ADMIN -).all() +results = response.json() ``` -### Using RESTful APIs +### REST API Integration ```python +# Using auto-generated REST APIs import requests -# Create a new post -post_data = { - "title": "My First Post", - "content": "This is the content of my first post.", - "user_id": 1, - "category_id": 1, - "published": True -} +# Get all user profiles +response = requests.get( + "https://api.gibsonai.com/v1/-/user-profiles", + headers={"Authorization": "Bearer your_api_key"} +) -response = requests.post("https://api.gibsonai.com/v1/-/posts", json=post_data) -new_post = response.json() +# Create new user profile +new_profile = { + "name": "John Doe", + "email": "john@example.com", + "preferences": {"theme": "dark", "notifications": true} +} -# Get posts with filtering -response = requests.get("https://api.gibsonai.com/v1/-/posts?published=true&limit=10") -posts = response.json() +response = requests.post( + "https://api.gibsonai.com/v1/-/user-profiles", + json=new_profile, + headers={"Authorization": "Bearer your_api_key"} +) ``` -## Use cases +## Use Cases - +### Agent Data Storage -SQL to database +Perfect for AI agents that need to: -Unified API layer +- Store user interactions and preferences +- Maintain conversation history +- Track agent performance metrics +- Store processed data from external sources -Schema migrations +### Natural Language Data Access - +Enable agents to: -## What's next? 
+- Query databases using natural language +- Create reports from stored data +- Filter and search data based on user requests +- Generate insights from historical data - +### Schema Evolution + +Allow agents to: + +- Adapt database structure based on new requirements +- Add new data fields as needed +- Modify existing tables without manual intervention +- Maintain data integrity during changes + +## Gibson Studio Integration + +Use Gibson Studio for: + +- Visual database exploration +- Query building and testing +- Data visualization +- Schema management + +## MCP Server Integration + +Connect AI tools and agents: + +- Natural language database operations +- Secure database access +- Contextual query suggestions +- Automated schema updates + +## Benefits for AI Agents + +- **Rapid Development**: Create databases in minutes, not hours +- **Natural Interface**: Use natural language instead of SQL +- **Automatic APIs**: Get REST APIs without coding +- **Schema Flexibility**: Easily modify structure as needs change +- **Safe Operations**: Built-in protections for data integrity --- -title: Unified API layer for applications -subtitle: Create a unified API layer for your applications with GibsonAI's automatically generated RESTful APIs and Python models +title: Database schemas for AI knowledge management systems +subtitle: Create database schemas for storing and managing knowledge data in AI applications enableTableOfContents: true updatedOn: '2025-01-08T00:00:00.000Z' --- -Create a unified API layer for your applications with GibsonAI's automatically generated RESTful APIs and Python models. Simplify data access patterns by providing a consistent interface to your hosted MySQL database with Pydantic schemas and SQLAlchemy models. +Create database schemas for storing and managing knowledge data in AI applications. Use GibsonAI's natural language database management to set up structured data storage for documents, metadata, and knowledge bases. ## How it works -GibsonAI automatically generates RESTful APIs for your database schema, providing a unified interface for your applications. Each table in your schema gets full CRUD operations with pagination, filtering, and sorting support, along with corresponding Pydantic schemas and SQLAlchemy models for Python integration. +GibsonAI provides database schema creation capabilities for AI knowledge management systems. While it doesn't provide vector storage or embeddings, it can create structured databases to store document metadata, knowledge base information, and other structured data that supports AI applications. 
-Auto-generated APIs +MCP Server Integration -Python models +Database Management -Unified access +Connect Your App + + + +## Key Features + +### Document Metadata Storage + +- **Document Information**: Store document titles, descriptions, and metadata +- **Content Organization**: Organize documents by categories and topics +- **Version Tracking**: Track document versions and updates +- **Access Control**: Manage document access and permissions + +### Knowledge Base Management + +- **Structured Data**: Store knowledge base information in structured format +- **Relationships**: Define relationships between knowledge entities +- **Search Metadata**: Store metadata to support search functionality +- **Content References**: Maintain references to external content + +### AI Application Support + +- **Natural Language Queries**: Query knowledge data using natural language +- **REST APIs**: Auto-generated APIs for knowledge data access +- **Integration Support**: Easy integration with AI applications +- **Flexible Schema**: Adapt schema to different knowledge management needs + +## Implementation Examples + +### Creating Knowledge Base Schema + +```python +# Using Gibson CLI to create knowledge base schema +# Create document management tables +# gibson modify documents "Create documents table with id, title, description, content_type, file_path, category, created_at, updated_at" +# gibson modify document_metadata "Create document_metadata table with id, document_id, metadata_key, metadata_value" +# gibson modify categories "Create categories table with id, name, description, parent_id" +# gibson modify tags "Create tags table with id, name, description, color" +# gibson modify document_tags "Create document_tags table with id, document_id, tag_id" + +# Generate models and deploy +# gibson code models +# gibson merge +``` + +### Document Management System + +```python +import requests +from datetime import datetime + +class DocumentManager: + def __init__(self, api_key): + self.api_key = api_key + self.base_url = "https://api.gibsonai.com/v1/-" + self.headers = {"Authorization": f"Bearer {api_key}"} -Text-to-SQL + def create_document(self, title, description, content_type, file_path, category_id): + """Create a new document record""" + document_data = { + "title": title, + "description": description, + "content_type": content_type, + "file_path": file_path, + "category": category_id, + "created_at": datetime.now().isoformat(), + "updated_at": datetime.now().isoformat() + } - + response = requests.post( + f"{self.base_url}/documents", + json=document_data, + headers=self.headers + ) -## Key Features + if response.status_code == 201: + document = response.json() + print(f"Document created: {title}") + return document + else: + print(f"Failed to create document: {response.status_code}") + return None -### Automatic API Generation + def add_document_metadata(self, document_id, metadata_key, metadata_value): + """Add metadata to a document""" + metadata_data = { + "document_id": document_id, + "metadata_key": metadata_key, + "metadata_value": metadata_value + } -- **RESTful Endpoints**: Full CRUD operations for all database tables -- **Pagination Support**: Built-in pagination for large datasets -- **Filtering and Sorting**: Advanced filtering and sorting capabilities -- **OpenAPI Documentation**: Automatically generated API documentation + response = requests.post( + f"{self.base_url}/document-metadata", + json=metadata_data, + headers=self.headers + ) -### Python Integration + if response.status_code 
== 201: + print(f"Metadata added: {metadata_key} = {metadata_value}") + return response.json() + else: + print(f"Failed to add metadata: {response.status_code}") + return None -- **Pydantic Schemas**: Type-safe validation schemas for API requests/responses -- **SQLAlchemy Models**: ORM models for direct database operations -- **Code Generation**: Automatically generated Python code for integration -- **Framework Support**: Compatible with FastAPI, Flask, and other Python frameworks + def tag_document(self, document_id, tag_id): + """Tag a document""" + tag_data = { + "document_id": document_id, + "tag_id": tag_id + } -### Text-to-SQL Analysis + response = requests.post( + f"{self.base_url}/document-tags", + json=tag_data, + headers=self.headers + ) -- **Natural Language Queries**: Ask questions about your data -- **Gibson Studio**: Run generated SQL queries in the intuitive data management UI -- **API Analytics**: Analyze API usage patterns and performance + if response.status_code == 201: + print(f"Document tagged successfully") + return response.json() + else: + print(f"Failed to tag document: {response.status_code}") + return None -## Step-by-step guide + def search_documents(self, query): + """Search documents using natural language""" + search_request = { + "query": query + } -### 1. Generate your database schema + response = requests.post( + f"{self.base_url}/query", + json=search_request, + headers=self.headers + ) -```bash -# Create a comprehensive application schema -gibson modify "Create an application with users, posts, comments, and tags. Users can create posts, posts can have multiple tags, and users can comment on posts" + if response.status_code == 200: + results = response.json() + print(f"Found {len(results)} documents") + return results + else: + print(f"Search failed: {response.status_code}") + return None ``` -### 2. Generate Python models - -```bash -# Generate Pydantic schemas for API validation -gibson code schemas +### Knowledge Base Schema for AI Applications -# Generate SQLAlchemy models for database operations -gibson code models +```python +# Create comprehensive knowledge base schema +# gibson modify knowledge_items "Create knowledge_items table with id, title, content_summary, source_type, source_reference, topic, created_at" +# gibson modify knowledge_relationships "Create knowledge_relationships table with id, source_item_id, target_item_id, relationship_type, strength" +# gibson modify topics "Create topics table with id, name, description, parent_topic_id" +# gibson modify content_sources "Create content_sources table with id, name, type, url, last_updated, status" +# gibson modify search_metadata "Create search_metadata table with id, knowledge_item_id, keywords, summary, relevance_score" -# Generate all Python code -gibson code base +# Generate models and deploy +# gibson code models +# gibson merge ``` -### 3. Explore your API with text-to-SQL +### AI Application Integration -Use Gibson Studio to analyze your data and API usage: +```python +class AIKnowledgeManager: + def __init__(self, api_key): + self.api_key = api_key + self.base_url = "https://api.gibsonai.com/v1/-" + self.headers = {"Authorization": f"Bearer {api_key}"} -- "Show me the most popular posts by comment count" -- "Which users are most active in commenting?" -- "What are the trending tags this month?" 
-- "Find posts with no comments" + def store_knowledge_item(self, title, content_summary, source_type, source_reference, topic): + """Store a knowledge item""" + knowledge_data = { + "title": title, + "content_summary": content_summary, + "source_type": source_type, + "source_reference": source_reference, + "topic": topic, + "created_at": datetime.now().isoformat() + } -### 4. Access your unified API + response = requests.post( + f"{self.base_url}/knowledge-items", + json=knowledge_data, + headers=self.headers + ) -Your API is automatically available at: + if response.status_code == 201: + knowledge_item = response.json() + print(f"Knowledge item stored: {title}") + return knowledge_item + else: + print(f"Failed to store knowledge item: {response.status_code}") + return None -- **Base URL**: `https://api.gibsonai.com` -- **SQL Queries**: `/v1/-/query` -- **Table Operations**: `/v1/-/[table-name-in-kebab-case]` -- **OpenAPI Spec**: Available in your project settings -- **API Documentation**: Available in the data API section + def create_knowledge_relationship(self, source_item_id, target_item_id, relationship_type, strength=1.0): + """Create relationship between knowledge items""" + relationship_data = { + "source_item_id": source_item_id, + "target_item_id": target_item_id, + "relationship_type": relationship_type, + "strength": strength + } -## Example unified API layer + response = requests.post( + f"{self.base_url}/knowledge-relationships", + json=relationship_data, + headers=self.headers + ) -### Generated database schema + if response.status_code == 201: + print(f"Relationship created: {relationship_type}") + return response.json() + else: + print(f"Failed to create relationship: {response.status_code}") + return None -```sql --- Users table -CREATE TABLE users ( - id INT AUTO_INCREMENT PRIMARY KEY, - username VARCHAR(50) UNIQUE NOT NULL, - email VARCHAR(255) UNIQUE NOT NULL, - bio TEXT, - created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP -); + def add_search_metadata(self, knowledge_item_id, keywords, summary, relevance_score): + """Add search metadata to knowledge item""" + metadata_data = { + "knowledge_item_id": knowledge_item_id, + "keywords": keywords, + "summary": summary, + "relevance_score": relevance_score + } --- Posts table -CREATE TABLE posts ( - id INT AUTO_INCREMENT PRIMARY KEY, - title VARCHAR(255) NOT NULL, - content TEXT NOT NULL, - user_id INT NOT NULL, - published BOOLEAN DEFAULT FALSE, - created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP, - FOREIGN KEY (user_id) REFERENCES users(id) -); + response = requests.post( + f"{self.base_url}/search-metadata", + json=metadata_data, + headers=self.headers + ) --- Comments table -CREATE TABLE comments ( - id INT AUTO_INCREMENT PRIMARY KEY, - content TEXT NOT NULL, - user_id INT NOT NULL, - post_id INT NOT NULL, - created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP, - FOREIGN KEY (user_id) REFERENCES users(id), - FOREIGN KEY (post_id) REFERENCES posts(id) -); + if response.status_code == 201: + print("Search metadata added") + return response.json() + else: + print(f"Failed to add search metadata: {response.status_code}") + return None --- Tags table -CREATE TABLE tags ( - id INT AUTO_INCREMENT PRIMARY KEY, - name VARCHAR(100) UNIQUE NOT NULL, - description TEXT -); + def query_knowledge_base(self, query): + """Query knowledge base using natural language""" + query_request = { + "query": query + } --- Post-tags relationship -CREATE TABLE post_tags ( - post_id INT NOT NULL, - tag_id INT NOT NULL, - PRIMARY KEY (post_id, tag_id), - 
FOREIGN KEY (post_id) REFERENCES posts(id), - FOREIGN KEY (tag_id) REFERENCES tags(id) -); + response = requests.post( + f"{self.base_url}/query", + json=query_request, + headers=self.headers + ) + + if response.status_code == 200: + results = response.json() + print(f"Found {len(results)} knowledge items") + return results + else: + print(f"Query failed: {response.status_code}") + return None ``` -### Generated Pydantic schemas +### Content Management Schema ```python -from pydantic import BaseModel, EmailStr -from datetime import datetime -from typing import Optional, List +# Create content management schema for AI applications +# gibson modify content_items "Create content_items table with id, title, content_type, content_length, language, created_at, updated_at" +# gibson modify content_sections "Create content_sections table with id, content_item_id, section_title, section_order, content_preview" +# gibson modify content_references "Create content_references table with id, content_item_id, reference_type, reference_target, reference_context" +# gibson modify content_analytics "Create content_analytics table with id, content_item_id, view_count, search_count, relevance_score, last_accessed" -class User(BaseModel): - id: Optional[int] = None - username: str - email: EmailStr - bio: Optional[str] = None - created_at: Optional[datetime] = None +# Generate models and deploy +# gibson code models +# gibson merge +``` -class Post(BaseModel): - id: Optional[int] = None - title: str - content: str - user_id: int - published: bool = False - created_at: Optional[datetime] = None +### Query and Analytics -class Comment(BaseModel): - id: Optional[int] = None - content: str - user_id: int - post_id: int - created_at: Optional[datetime] = None +```python +class KnowledgeAnalytics: + def __init__(self, api_key): + self.api_key = api_key + self.base_url = "https://api.gibsonai.com/v1/-" + self.headers = {"Authorization": f"Bearer {api_key}"} -class Tag(BaseModel): - id: Optional[int] = None - name: str - description: Optional[str] = None + def get_popular_topics(self): + """Get most popular topics""" + query_request = { + "query": "Show the most popular topics based on knowledge item count and search frequency" + } -class PostTag(BaseModel): - post_id: int - tag_id: int -``` + response = requests.post( + f"{self.base_url}/query", + json=query_request, + headers=self.headers + ) -### Generated SQLAlchemy models + if response.status_code == 200: + results = response.json() + print("Popular topics:") + for result in results: + print(f" {result}") + return results + else: + print(f"Query failed: {response.status_code}") + return None -```python -from sqlalchemy import Column, Integer, String, Text, Boolean, DateTime, ForeignKey, Table -from sqlalchemy.ext.declarative import declarative_base -from sqlalchemy.orm import relationship -from datetime import datetime + def analyze_content_performance(self): + """Analyze content performance""" + query_request = { + "query": "Show content performance metrics including view counts, search frequency, and relevance scores" + } -Base = declarative_base() + response = requests.post( + f"{self.base_url}/query", + json=query_request, + headers=self.headers + ) -# Association table for many-to-many relationship -post_tags = Table('post_tags', Base.metadata, - Column('post_id', Integer, ForeignKey('posts.id'), primary_key=True), - Column('tag_id', Integer, ForeignKey('tags.id'), primary_key=True) -) + if response.status_code == 200: + results = response.json() + 
print("Content performance analysis:") + for result in results: + print(f" {result}") + return results + else: + print(f"Query failed: {response.status_code}") + return None -class User(Base): - __tablename__ = 'users' + def find_related_content(self, knowledge_item_id): + """Find related content based on relationships""" + query_request = { + "query": f"Find all content items related to knowledge item {knowledge_item_id} through relationships" + } - id = Column(Integer, primary_key=True, autoincrement=True) - username = Column(String(50), unique=True, nullable=False) - email = Column(String(255), unique=True, nullable=False) - bio = Column(Text) - created_at = Column(DateTime, default=datetime.utcnow) + response = requests.post( + f"{self.base_url}/query", + json=query_request, + headers=self.headers + ) - posts = relationship("Post", back_populates="user") - comments = relationship("Comment", back_populates="user") + if response.status_code == 200: + results = response.json() + print(f"Found {len(results)} related content items") + return results + else: + print(f"Query failed: {response.status_code}") + return None -class Post(Base): - __tablename__ = 'posts' + def get_content_gaps(self): + """Identify content gaps in knowledge base""" + query_request = { + "query": "Identify topics with low content coverage or missing relationships" + } - id = Column(Integer, primary_key=True, autoincrement=True) - title = Column(String(255), nullable=False) - content = Column(Text, nullable=False) - user_id = Column(Integer, ForeignKey('users.id'), nullable=False) - published = Column(Boolean, default=False) - created_at = Column(DateTime, default=datetime.utcnow) + response = requests.post( + f"{self.base_url}/query", + json=query_request, + headers=self.headers + ) - user = relationship("User", back_populates="posts") - comments = relationship("Comment", back_populates="post") - tags = relationship("Tag", secondary=post_tags, back_populates="posts") + if response.status_code == 200: + results = response.json() + print("Content gaps identified:") + for result in results: + print(f" {result}") + return results + else: + print(f"Query failed: {response.status_code}") + return None +``` -class Comment(Base): - __tablename__ = 'comments' +## Use Cases - id = Column(Integer, primary_key=True, autoincrement=True) - content = Column(Text, nullable=False) - user_id = Column(Integer, ForeignKey('users.id'), nullable=False) - post_id = Column(Integer, ForeignKey('posts.id'), nullable=False) - created_at = Column(DateTime, default=datetime.utcnow) +### Document Management - user = relationship("User", back_populates="comments") - post = relationship("Post", back_populates="comments") +Perfect for: -class Tag(Base): - __tablename__ = 'tags' +- Storing document metadata and information +- Organizing documents by categories and topics +- Tracking document versions and changes +- Managing document access and permissions - id = Column(Integer, primary_key=True, autoincrement=True) - name = Column(String(100), unique=True, nullable=False) - description = Column(Text) +### Knowledge Base Systems - posts = relationship("Post", secondary=post_tags, back_populates="tags") -``` +Enable: -## API integration examples +- Structured storage of knowledge information +- Relationship mapping between knowledge items +- Search metadata for improved discoverability +- Content analytics and performance tracking -### Using RESTful APIs +### AI Application Support -```python -import requests +Support: -# Create a new user -user_data = 
{ - "username": "john_doe", - "email": "john@example.com", - "bio": "Software developer" -} -response = requests.post("https://api.gibsonai.com/v1/-/users", json=user_data) -new_user = response.json() +- Data storage for AI-powered applications +- Structured metadata for AI processing +- Integration with external AI services +- Analytics and reporting capabilities -# Get all posts with pagination -response = requests.get("https://api.gibsonai.com/v1/-/posts?page=1&limit=10") -posts = response.json() +### Content Analytics -# Create a new post -post_data = { - "title": "My First Post", - "content": "This is my first blog post!", - "user_id": new_user["id"], - "published": True -} -response = requests.post("https://api.gibsonai.com/v1/-/posts", json=post_data) -new_post = response.json() +Allow: -# Add tags to the post -tag_data = {"name": "technology", "description": "Tech-related posts"} -response = requests.post("https://api.gibsonai.com/v1/-/tags", json=tag_data) -tag = response.json() +- Tracking content performance and usage +- Identifying popular topics and trends +- Finding content gaps and opportunities +- Measuring content effectiveness -# Associate tag with post -post_tag_data = {"post_id": new_post["id"], "tag_id": tag["id"]} -response = requests.post("https://api.gibsonai.com/v1/-/post-tags", json=post_tag_data) -``` +## Benefits for AI Applications -### Using direct SQL queries +### Structured Data Management -```python -# Complex queries using text-to-SQL -query = """ -SELECT p.title, p.content, u.username, COUNT(c.id) as comment_count -FROM posts p -JOIN users u ON p.user_id = u.id -LEFT JOIN comments c ON p.id = c.post_id -WHERE p.published = true -GROUP BY p.id, p.title, p.content, u.username -ORDER BY comment_count DESC -LIMIT 10 -""" +- **Organized Storage**: Well-structured storage for knowledge data +- **Relationship Mapping**: Define relationships between data entities +- **Flexible Schema**: Adapt to different knowledge management needs +- **Natural Language Queries**: Query data using natural language + +### Easy Integration + +- **REST APIs**: Auto-generated APIs for easy integration +- **MCP Server**: Connect AI tools through MCP server +- **Python Models**: Generated models for easy development +- **Documentation**: Comprehensive API documentation -response = requests.post("https://api.gibsonai.com/v1/-/query", json={"query": query}) -popular_posts = response.json() -``` +### Scalable Architecture -### Using SQLAlchemy models +- **Performance**: Optimized for knowledge management workloads +- **Flexibility**: Easy to modify schema as needs evolve +- **Analytics**: Built-in analytics and reporting capabilities +- **Search Support**: Metadata storage to support search functionality -```python -from sqlalchemy import create_engine, func -from sqlalchemy.orm import sessionmaker +## Important Limitations -# Use connection string from GibsonAI UI -engine = create_engine("your-connection-string-from-ui") -Session = sessionmaker(bind=engine) -session = Session() +### What GibsonAI Does NOT Provide -# Complex queries using SQLAlchemy -popular_posts = session.query( - Post.title, - Post.content, - User.username, - func.count(Comment.id).label('comment_count') -).join(User).outerjoin(Comment).filter( - Post.published -).group_by(Post.id).order_by( - func.count(Comment.id).desc() -).limit(10).all() +- **Vector Storage**: No built-in vector database or embeddings storage +- **Similarity Search**: No semantic similarity search capabilities +- **Text Processing**: No automatic 
text processing or chunking +- **Embedding Generation**: No automatic embedding generation -# Query with relationships -user_with_posts = session.query(User).filter( - User.username == 'john_doe' -).first() -user_posts = user_with_posts.posts -``` +### External Integration Required -## Advanced API features +For complete RAG implementations, you'll need: -### Filtering and pagination +- **Vector Database**: Use external vector databases like Pinecone, Weaviate, or Chroma +- **Embedding Services**: Use OpenAI, Hugging Face, or other embedding services +- **Text Processing**: Implement text chunking and processing separately +- **Search Logic**: Implement semantic search logic in your application -```python -# Filter posts by published status with pagination -response = requests.get("https://api.gibsonai.com/v1/-/posts", params={ - "published": "true", - "page": 1, - "limit": 5, - "sort": "created_at", - "order": "desc" -}) -filtered_posts = response.json() +## Best Practices -# Filter users by username pattern -response = requests.get("https://api.gibsonai.com/v1/-/users", params={ - "username__like": "john%" -}) -matching_users = response.json() -``` +### Schema Design -### Relationship queries +- **Clear Structure**: Design clear, logical database structure +- **Appropriate Relationships**: Define meaningful relationships between entities +- **Metadata Storage**: Store relevant metadata for search and analytics +- **Performance Optimization**: Consider query performance in schema design -```python -# Get user with their posts and comments -user_id = 1 -response = requests.get(f"https://api.gibsonai.com/v1/-/users/{user_id}", params={ - "include": "posts,comments" -}) -user_with_relations = response.json() +### Data Management -# Get posts with their tags -response = requests.get("https://api.gibsonai.com/v1/-/posts", params={ - "include": "tags,user" -}) -posts_with_tags = response.json() -``` +- **Quality Control**: Maintain high-quality data for better AI performance +- **Consistent Formats**: Use consistent data formats across the system +- **Regular Updates**: Keep data current and relevant +- **Backup Strategy**: Implement proper backup and recovery procedures -## Text-to-SQL analysis examples +### Integration Strategy -### Popular content analysis +- **External Services**: Plan integration with external AI services +- **API Design**: Design APIs that work well with AI applications +- **Error Handling**: Implement robust error handling +- **Performance Monitoring**: Monitor system performance and usage -```sql --- Generated from: "Show me the most popular tags by post count" -SELECT - t.name, - t.description, - COUNT(pt.post_id) as post_count -FROM tags t -LEFT JOIN post_tags pt ON t.id = pt.tag_id -GROUP BY t.id, t.name, t.description -ORDER BY post_count DESC -LIMIT 10; -``` +## Getting Started -### User engagement metrics +1. **Design Schema**: Plan your knowledge management database structure +2. **Create Database**: Use Gibson CLI to create your database schema +3. **Generate Models**: Create Python models for integration +4. **Test Integration**: Test with sample data and queries +5. 
**Deploy**: Deploy your knowledge management system -```sql --- Generated from: "Find users with the highest engagement" -SELECT - u.username, - u.email, - COUNT(DISTINCT p.id) as post_count, - COUNT(DISTINCT c.id) as comment_count, - COUNT(DISTINCT p.id) + COUNT(DISTINCT c.id) as total_activity -FROM users u -LEFT JOIN posts p ON u.id = p.user_id -LEFT JOIN comments c ON u.id = c.user_id -GROUP BY u.id, u.username, u.email -ORDER BY total_activity DESC -LIMIT 10; -``` +## Gibson CLI Commands -## Use cases +```bash +# Create knowledge management schema +gibson modify table_name "description of knowledge table" +gibson code models +gibson merge - +# Generate models for integration +gibson code models +gibson code schemas +``` -AI-driven schema generation +## Sample Schema Examples -SQL to database +### Basic Document Management -Schema migrations +```python +# Simple document management schema +document_tables = { + "documents": "Create documents table with id, title, content_type, file_path, created_at", + "categories": "Create categories table with id, name, description", + "tags": "Create tags table with id, name, color", + "document_tags": "Create document_tags table with id, document_id, tag_id" +} +``` - +### Knowledge Base with Relationships -## What's next? +```python +# Knowledge base with relationships +knowledge_tables = { + "knowledge_items": "Create knowledge_items table with id, title, summary, topic, created_at", + "relationships": "Create relationships table with id, source_id, target_id, type, strength", + "topics": "Create topics table with id, name, description, parent_id", + "search_metadata": "Create search_metadata table with id, item_id, keywords, relevance_score" +} +``` - +Ready to create database schemas for your AI knowledge management system? [Get started with GibsonAI](/get-started/signing-up). --- -title: Prompt-Driven Schema Generation for RAG Workflows -subtitle: Generate database schemas for RAG applications using natural language prompts with Pydantic and SQLAlchemy models +title: AI-Driven Schema and Model Generation +subtitle: Generate database schemas and Pydantic/SQLAlchemy models with AI-powered natural language prompts enableTableOfContents: true updatedOn: '2025-01-08T00:00:00.000Z' --- -Generate database schemas for RAG applications using natural language prompts with Pydantic and SQLAlchemy models. Create optimized schemas for vector storage and retrieval workflows with AI-powered schema generation tailored for AI applications. +Generate database schemas and Pydantic/SQLAlchemy models with AI-powered natural language prompts. Keep your code and database in sync with automated model generation and schema management for seamless Python integration. ## How it works -Describe your RAG use case in natural language, and GibsonAI generates an optimized MySQL database schema with vector storage capabilities and retrieval optimization. The system automatically creates Pydantic schemas and SQLAlchemy models for seamless Python integration. +When you describe your database needs in natural language, GibsonAI generates both the MySQL database schema and corresponding Pydantic schemas and SQLAlchemy models. Your code and database stay synchronized through automatic model generation and schema management. 
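+A minimal sketch of that loop, using an illustrative prompt and table name (the step-by-step guide below covers each command in detail):
+
+```bash
+# Describe the change in plain English; GibsonAI updates the MySQL schema
+gibson modify "Add an orders table with id, user_id, total_amount, and created_at"
+
+# Regenerate the Python layer so code stays in sync with the new schema
+gibson code schemas   # Pydantic validation schemas
+gibson code models    # SQLAlchemy ORM models
+```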
-AI-optimized schemas +AI-powered generation -Vector storage +Python integration -Python integration +Auto-sync -Text-to-SQL +Text-to-SQL ## Key Features -### RAG-Optimized Schema Generation +### Natural Language Schema Generation -- **Vector Storage**: Optimized tables for storing embeddings and vectors -- **Document Management**: Efficient document storage and indexing -- **Metadata Handling**: Structured metadata storage for enhanced retrieval -- **Relationship Modeling**: Proper relationships between documents, chunks, and vectors +- **Plain English Prompts**: Describe your database needs in simple language +- **Context Understanding**: AI understands relationships and constraints +- **Optimization Suggestions**: Get suggestions for better performance +- **Iterative Refinement**: Refine schemas through conversation -### Python AI Integration +### Automatic Python Model Generation -- **Pydantic Schemas**: Type-safe models for data validation -- **SQLAlchemy Models**: ORM models for database operations -- **API Generation**: RESTful APIs for vector operations -- **Embedding Support**: Optimized storage for AI embeddings +- **Pydantic Schemas**: Type-safe validation schemas for all tables +- **SQLAlchemy Models**: Complete ORM models with relationships +- **Code Synchronization**: Models automatically updated with schema changes +- **Python Integration**: Ready-to-use code for immediate integration ### Text-to-SQL Analysis -- **Natural Language Queries**: Ask questions about your RAG data +- **Natural Language Queries**: Ask questions about your data - **Gibson Studio**: Run generated SQL queries in the intuitive data management UI -- **Vector Analysis**: Analyze vector similarity and document relationships -- **Performance Insights**: Optimize RAG query performance +- **Schema Analysis**: Analyze table relationships and data patterns +- **Performance Insights**: Get recommendations for optimization ## Step-by-step guide -### 1. Generate RAG schema with natural language +### 1. Generate schema with natural language ```bash -# Create a schema for document-based RAG -gibson modify documents "Create a RAG system with documents table containing title, content, and metadata, a chunks table for text segments with embeddings, and a queries table for tracking search queries" +# Create a comprehensive schema +gibson modify "Create an e-commerce system with users, products, orders, and payments. Users can have multiple orders, each order contains multiple products, and each order has one payment record" ``` ### 2. Generate Python models @@ -8831,23 +8907,23 @@ gibson modify documents "Create a RAG system with documents table containing tit # Generate Pydantic schemas for validation gibson code schemas -# Generate SQLAlchemy models for database operations +# Generate SQLAlchemy models for ORM gibson code models # Generate all Python code gibson code base ``` -### 3. Explore RAG data with text-to-SQL +### 3. Explore with text-to-SQL -Use Gibson Studio to analyze your RAG system: +Use Gibson Studio to analyze your generated schema: -- "Show me the most frequently queried documents" -- "Find documents with similar embeddings to a specific vector" -- "What's the average chunk size across all documents?" -- "Find documents that haven't been queried in the last month" +- "Show me the relationship between users and orders" +- "Which tables have foreign key constraints?" +- "What's the structure of the products table?" +- "Find any tables without primary keys" -### 4. Access your RAG data +### 4. 
Access your data Integration options: @@ -8858,175 +8934,196 @@ Integration options: - **Direct Connection**: Connection string available in the UI - **API Documentation**: Available in the data API section -## Example RAG schema +## Example schema generation -### Generated database schema +### Natural language prompt + +``` +"Create a blog system with users, posts, and comments. Users can write multiple posts, and each post can have multiple comments. Include timestamps, user roles, and post categories." +``` + +### Generated MySQL schema ```sql --- Documents table for storing source documents -CREATE TABLE documents ( +CREATE TABLE users ( id INT AUTO_INCREMENT PRIMARY KEY, - title VARCHAR(255) NOT NULL, - content TEXT NOT NULL, - metadata JSON, - source_url VARCHAR(500), + username VARCHAR(50) UNIQUE NOT NULL, + email VARCHAR(255) UNIQUE NOT NULL, + password_hash VARCHAR(255) NOT NULL, + role ENUM('user', 'admin') DEFAULT 'user', created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP, updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP ); --- Chunks table for storing text segments -CREATE TABLE chunks ( +CREATE TABLE categories ( id INT AUTO_INCREMENT PRIMARY KEY, - document_id INT NOT NULL, + name VARCHAR(100) UNIQUE NOT NULL, + description TEXT, + created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP +); + +CREATE TABLE posts ( + id INT AUTO_INCREMENT PRIMARY KEY, + title VARCHAR(255) NOT NULL, content TEXT NOT NULL, - embedding JSON, - chunk_index INT NOT NULL, - token_count INT, + user_id INT NOT NULL, + category_id INT NOT NULL, + published BOOLEAN DEFAULT FALSE, created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP, - FOREIGN KEY (document_id) REFERENCES documents(id) ON DELETE CASCADE + updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP, + FOREIGN KEY (user_id) REFERENCES users(id), + FOREIGN KEY (category_id) REFERENCES categories(id) ); --- Queries table for tracking search queries -CREATE TABLE queries ( +CREATE TABLE comments ( id INT AUTO_INCREMENT PRIMARY KEY, - query_text TEXT NOT NULL, - query_embedding JSON, - results_found INT DEFAULT 0, - created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP + content TEXT NOT NULL, + user_id INT NOT NULL, + post_id INT NOT NULL, + created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP, + FOREIGN KEY (user_id) REFERENCES users(id), + FOREIGN KEY (post_id) REFERENCES posts(id) ON DELETE CASCADE ); ``` ### Generated Pydantic schemas ```python -from pydantic import BaseModel +from pydantic import BaseModel, EmailStr from datetime import datetime -from typing import Optional, Dict, Any, List +from typing import Optional +from enum import Enum -class Document(BaseModel): +class UserRole(str, Enum): + USER = "user" + ADMIN = "admin" + +class User(BaseModel): id: Optional[int] = None - title: str - content: str - metadata: Optional[Dict[str, Any]] = None - source_url: Optional[str] = None + username: str + email: EmailStr + password_hash: str + role: UserRole = UserRole.USER created_at: Optional[datetime] = None updated_at: Optional[datetime] = None -class Chunk(BaseModel): +class Category(BaseModel): id: Optional[int] = None - document_id: int + name: str + description: Optional[str] = None + created_at: Optional[datetime] = None + +class Post(BaseModel): + id: Optional[int] = None + title: str content: str - embedding: Optional[List[float]] = None - chunk_index: int - token_count: Optional[int] = None + user_id: int + category_id: int + published: bool = False created_at: Optional[datetime] = None + updated_at: Optional[datetime] 
= None -class Query(BaseModel): +class Comment(BaseModel): id: Optional[int] = None - query_text: str - query_embedding: Optional[List[float]] = None - results_found: int = 0 + content: str + user_id: int + post_id: int created_at: Optional[datetime] = None ``` ### Generated SQLAlchemy models ```python -from sqlalchemy import Column, Integer, String, Text, JSON, DateTime, ForeignKey +from sqlalchemy import Column, Integer, String, Text, Boolean, DateTime, ForeignKey, Enum from sqlalchemy.ext.declarative import declarative_base from sqlalchemy.orm import relationship from datetime import datetime +import enum Base = declarative_base() -class Document(Base): - __tablename__ = 'documents' +class UserRole(enum.Enum): + USER = "user" + ADMIN = "admin" + +class User(Base): + __tablename__ = 'users' id = Column(Integer, primary_key=True, autoincrement=True) - title = Column(String(255), nullable=False) - content = Column(Text, nullable=False) - metadata = Column(JSON) - source_url = Column(String(500)) + username = Column(String(50), unique=True, nullable=False) + email = Column(String(255), unique=True, nullable=False) + password_hash = Column(String(255), nullable=False) + role = Column(Enum(UserRole), default=UserRole.USER) created_at = Column(DateTime, default=datetime.utcnow) updated_at = Column(DateTime, default=datetime.utcnow, onupdate=datetime.utcnow) - chunks = relationship("Chunk", back_populates="document", cascade="all, delete-orphan") + posts = relationship("Post", back_populates="user") + comments = relationship("Comment", back_populates="user") -class Chunk(Base): - __tablename__ = 'chunks' +class Category(Base): + __tablename__ = 'categories' id = Column(Integer, primary_key=True, autoincrement=True) - document_id = Column(Integer, ForeignKey('documents.id'), nullable=False) - content = Column(Text, nullable=False) - embedding = Column(JSON) - chunk_index = Column(Integer, nullable=False) - token_count = Column(Integer) + name = Column(String(100), unique=True, nullable=False) + description = Column(Text) created_at = Column(DateTime, default=datetime.utcnow) - document = relationship("Document", back_populates="chunks") + posts = relationship("Post", back_populates="category") -class Query(Base): - __tablename__ = 'queries' +class Post(Base): + __tablename__ = 'posts' id = Column(Integer, primary_key=True, autoincrement=True) - query_text = Column(Text, nullable=False) - query_embedding = Column(JSON) - results_found = Column(Integer, default=0) + title = Column(String(255), nullable=False) + content = Column(Text, nullable=False) + user_id = Column(Integer, ForeignKey('users.id'), nullable=False) + category_id = Column(Integer, ForeignKey('categories.id'), nullable=False) + published = Column(Boolean, default=False) created_at = Column(DateTime, default=datetime.utcnow) -``` - -## RAG integration examples - -### Storing documents and embeddings - -```python -import requests -from typing import List + updated_at = Column(DateTime, default=datetime.utcnow, onupdate=datetime.utcnow) -# Store a document -document_data = { - "title": "Machine Learning Basics", - "content": "Machine learning is a subset of artificial intelligence...", - "metadata": {"category": "education", "tags": ["AI", "ML"]}, - "source_url": "https://example.com/ml-basics" -} + user = relationship("User", back_populates="posts") + category = relationship("Category", back_populates="posts") + comments = relationship("Comment", back_populates="post", cascade="all, delete-orphan") -response = 
requests.post("https://api.gibsonai.com/v1/-/documents", json=document_data) -document = response.json() +class Comment(Base): + __tablename__ = 'comments' -# Store chunks with embeddings -chunk_data = { - "document_id": document["id"], - "content": "Machine learning is a subset of artificial intelligence", - "embedding": [0.1, 0.2, 0.3, ...], # Your embedding vector - "chunk_index": 0, - "token_count": 150 -} + id = Column(Integer, primary_key=True, autoincrement=True) + content = Column(Text, nullable=False) + user_id = Column(Integer, ForeignKey('users.id'), nullable=False) + post_id = Column(Integer, ForeignKey('posts.id'), nullable=False) + created_at = Column(DateTime, default=datetime.utcnow) -response = requests.post("https://api.gibsonai.com/v1/-/chunks", json=chunk_data) + user = relationship("User", back_populates="comments") + post = relationship("Post", back_populates="comments") ``` -### Querying with text-to-SQL +## Integration examples + +### Using Pydantic for validation ```python -# Use text-to-SQL to find similar documents -query = """ -SELECT d.title, d.content, c.content as chunk_content -FROM documents d -JOIN chunks c ON d.id = c.document_id -WHERE JSON_EXTRACT(d.metadata, '$.category') = 'education' -ORDER BY d.created_at DESC -LIMIT 10 -""" +from pydantic import ValidationError -response = requests.post("https://api.gibsonai.com/v1/-/query", json={"query": query}) -results = response.json() +# Validate user input +try: + user_data = { + "username": "john_doe", + "email": "john@example.com", + "password_hash": "hashed_password", + "role": "admin" + } + user = User(**user_data) + print(f"Valid user: {user.username}") +except ValidationError as e: + print(f"Validation error: {e}") ``` -### Using SQLAlchemy for vector operations +### Using SQLAlchemy for database operations ```python -from sqlalchemy import create_engine, func +from sqlalchemy import create_engine from sqlalchemy.orm import sessionmaker # Use connection string from GibsonAI UI @@ -9034,27 +9131,53 @@ engine = create_engine("your-connection-string-from-ui") Session = sessionmaker(bind=engine) session = Session() -# Find documents by metadata -documents = session.query(Document).filter( - func.json_extract(Document.metadata, '$.category') == 'education' +# Create new user +new_user = User( + username="jane_doe", + email="jane@example.com", + password_hash="hashed_password", + role=UserRole.USER +) +session.add(new_user) +session.commit() + +# Query with relationships +posts_with_authors = session.query(Post).join(User).filter( + User.role == UserRole.ADMIN ).all() +``` -# Get chunks for a document -document_id = 1 -chunks = session.query(Chunk).filter( - Chunk.document_id == document_id -).order_by(Chunk.chunk_index).all() +### Using RESTful APIs + +```python +import requests + +# Create a new post +post_data = { + "title": "My First Post", + "content": "This is the content of my first post.", + "user_id": 1, + "category_id": 1, + "published": True +} + +response = requests.post("https://api.gibsonai.com/v1/-/posts", json=post_data) +new_post = response.json() + +# Get posts with filtering +response = requests.get("https://api.gibsonai.com/v1/-/posts?published=true&limit=10") +posts = response.json() ``` ## Use cases -AI-driven schema generation +SQL to database Unified API layer -Database optimization +Schema migrations @@ -9064,215 +9187,421 @@ chunks = session.query(Chunk).filter( --- -title: SQL to fully functional database -subtitle: Transform SQL schemas into production-ready databases with APIs, 
Pydantic models, and SQLAlchemy integration +title: Unified API layer for applications +subtitle: Create a unified API layer for your applications with GibsonAI's automatically generated RESTful APIs and Python models enableTableOfContents: true updatedOn: '2025-01-08T00:00:00.000Z' --- -Transform your SQL schemas into production-ready databases with automatically generated APIs, Pydantic models, and SQLAlchemy integration. Go from schema to production in minutes, not days. +Create a unified API layer for your applications with GibsonAI's automatically generated RESTful APIs and Python models. Simplify data access patterns by providing a consistent interface to your hosted MySQL database with Pydantic schemas and SQLAlchemy models. + +## How it works + +GibsonAI automatically generates RESTful APIs for your database schema, providing a unified interface for your applications. Each table in your schema gets full CRUD operations with pagination, filtering, and sorting support, along with corresponding Pydantic schemas and SQLAlchemy models for Python integration. + + -## How it works +Auto-generated APIs -Import your SQL schema file, and GibsonAI automatically creates a fully functional MySQL database with RESTful APIs, Pydantic schemas, and SQLAlchemy models. Perfect for rapid prototyping and production deployments. +Python models + +Unified access + +Text-to-SQL + + ## Key Features -### SQL Schema Import +### Automatic API Generation -- **SQL Schema Files**: Direct import from .sql files with DDL statements -- **MySQL Support**: Full support for MySQL databases and syntax -- **Relationship Detection**: Automatically detects foreign key relationships -- **Data Type Mapping**: Intelligent mapping of SQL data types to Python types +- **RESTful Endpoints**: Full CRUD operations for all database tables +- **Pagination Support**: Built-in pagination for large datasets +- **Filtering and Sorting**: Advanced filtering and sorting capabilities +- **OpenAPI Documentation**: Automatically generated API documentation -### Automatic Code Generation +### Python Integration -- **Pydantic Schemas**: Generate validation schemas for all your tables -- **SQLAlchemy Models**: Create ORM models for database interactions -- **API Documentation**: Automatically generated OpenAPI specifications -- **Python Integration**: Ready-to-use Python code for your applications +- **Pydantic Schemas**: Type-safe validation schemas for API requests/responses +- **SQLAlchemy Models**: ORM models for direct database operations +- **Code Generation**: Automatically generated Python code for integration +- **Framework Support**: Compatible with FastAPI, Flask, and other Python frameworks ### Text-to-SQL Analysis -- **Natural Language Queries**: Ask questions about your imported data +- **Natural Language Queries**: Ask questions about your data - **Gibson Studio**: Run generated SQL queries in the intuitive data management UI -- **Data Exploration**: Discover patterns and insights in your imported schema -- **Query Generation**: Automatically generate SQL from natural language +- **API Analytics**: Analyze API usage patterns and performance ## Step-by-step guide -### 1. Import your SQL schema +### 1. Generate your database schema ```bash -# Import from SQL file -gibson import mysql - -# Or import from existing database -gibson import mysql +# Create a comprehensive application schema +gibson modify "Create an application with users, posts, comments, and tags. 
Users can create posts, posts can have multiple tags, and users can comment on posts" ``` ### 2. Generate Python models ```bash -# Generate Pydantic schemas +# Generate Pydantic schemas for API validation gibson code schemas -# Generate SQLAlchemy models +# Generate SQLAlchemy models for database operations gibson code models # Generate all Python code gibson code base ``` -### 3. Explore with text-to-SQL +### 3. Explore your API with text-to-SQL -Use Gibson Studio to analyze your imported schema: +Use Gibson Studio to analyze your data and API usage: -- "Show me all tables and their relationships" -- "Which tables have the most foreign key constraints?" -- "Find any tables without primary keys" -- "What's the structure of the users table?" +- "Show me the most popular posts by comment count" +- "Which users are most active in commenting?" +- "What are the trending tags this month?" +- "Find posts with no comments" -### 4. Deploy and access your database +### 4. Access your unified API -Your database is automatically deployed with: +Your API is automatically available at: -- **RESTful APIs**: Base URL `https://api.gibsonai.com` - - SQL queries: `/v1/-/query` - - Table operations: `/v1/-/[table-name-in-kebab-case]` +- **Base URL**: `https://api.gibsonai.com` +- **SQL Queries**: `/v1/-/query` +- **Table Operations**: `/v1/-/[table-name-in-kebab-case]` - **OpenAPI Spec**: Available in your project settings -- **Direct Connection**: Connection string available in the UI - **API Documentation**: Available in the data API section -## Example SQL import +## Example unified API layer -### Sample schema file +### Generated database schema ```sql --- users.sql +-- Users table CREATE TABLE users ( id INT AUTO_INCREMENT PRIMARY KEY, + username VARCHAR(50) UNIQUE NOT NULL, email VARCHAR(255) UNIQUE NOT NULL, - name VARCHAR(100) NOT NULL, + bio TEXT, created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ); +-- Posts table CREATE TABLE posts ( id INT AUTO_INCREMENT PRIMARY KEY, title VARCHAR(255) NOT NULL, - content TEXT, + content TEXT NOT NULL, user_id INT NOT NULL, + published BOOLEAN DEFAULT FALSE, created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP, FOREIGN KEY (user_id) REFERENCES users(id) ); + +-- Comments table +CREATE TABLE comments ( + id INT AUTO_INCREMENT PRIMARY KEY, + content TEXT NOT NULL, + user_id INT NOT NULL, + post_id INT NOT NULL, + created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP, + FOREIGN KEY (user_id) REFERENCES users(id), + FOREIGN KEY (post_id) REFERENCES posts(id) +); + +-- Tags table +CREATE TABLE tags ( + id INT AUTO_INCREMENT PRIMARY KEY, + name VARCHAR(100) UNIQUE NOT NULL, + description TEXT +); + +-- Post-tags relationship +CREATE TABLE post_tags ( + post_id INT NOT NULL, + tag_id INT NOT NULL, + PRIMARY KEY (post_id, tag_id), + FOREIGN KEY (post_id) REFERENCES posts(id), + FOREIGN KEY (tag_id) REFERENCES tags(id) +); ``` ### Generated Pydantic schemas ```python -from pydantic import BaseModel +from pydantic import BaseModel, EmailStr from datetime import datetime -from typing import Optional +from typing import Optional, List class User(BaseModel): id: Optional[int] = None - email: str - name: str + username: str + email: EmailStr + bio: Optional[str] = None created_at: Optional[datetime] = None class Post(BaseModel): id: Optional[int] = None title: str - content: Optional[str] = None + content: str + user_id: int + published: bool = False + created_at: Optional[datetime] = None + +class Comment(BaseModel): + id: Optional[int] = None + content: str user_id: int + post_id: int 
created_at: Optional[datetime] = None + +class Tag(BaseModel): + id: Optional[int] = None + name: str + description: Optional[str] = None + +class PostTag(BaseModel): + post_id: int + tag_id: int ``` ### Generated SQLAlchemy models ```python -from sqlalchemy import Column, Integer, String, Text, DateTime, ForeignKey +from sqlalchemy import Column, Integer, String, Text, Boolean, DateTime, ForeignKey, Table from sqlalchemy.ext.declarative import declarative_base from sqlalchemy.orm import relationship +from datetime import datetime Base = declarative_base() +# Association table for many-to-many relationship +post_tags = Table('post_tags', Base.metadata, + Column('post_id', Integer, ForeignKey('posts.id'), primary_key=True), + Column('tag_id', Integer, ForeignKey('tags.id'), primary_key=True) +) + class User(Base): __tablename__ = 'users' id = Column(Integer, primary_key=True, autoincrement=True) + username = Column(String(50), unique=True, nullable=False) email = Column(String(255), unique=True, nullable=False) - name = Column(String(100), nullable=False) + bio = Column(Text) created_at = Column(DateTime, default=datetime.utcnow) posts = relationship("Post", back_populates="user") + comments = relationship("Comment", back_populates="user") class Post(Base): __tablename__ = 'posts' id = Column(Integer, primary_key=True, autoincrement=True) title = Column(String(255), nullable=False) - content = Column(Text) + content = Column(Text, nullable=False) + user_id = Column(Integer, ForeignKey('users.id'), nullable=False) + published = Column(Boolean, default=False) + created_at = Column(DateTime, default=datetime.utcnow) + + user = relationship("User", back_populates="posts") + comments = relationship("Comment", back_populates="post") + tags = relationship("Tag", secondary=post_tags, back_populates="posts") + +class Comment(Base): + __tablename__ = 'comments' + + id = Column(Integer, primary_key=True, autoincrement=True) + content = Column(Text, nullable=False) user_id = Column(Integer, ForeignKey('users.id'), nullable=False) + post_id = Column(Integer, ForeignKey('posts.id'), nullable=False) created_at = Column(DateTime, default=datetime.utcnow) - user = relationship("User", back_populates="posts") + user = relationship("User", back_populates="comments") + post = relationship("Post", back_populates="comments") + +class Tag(Base): + __tablename__ = 'tags' + + id = Column(Integer, primary_key=True, autoincrement=True) + name = Column(String(100), unique=True, nullable=False) + description = Column(Text) + + posts = relationship("Post", secondary=post_tags, back_populates="tags") +``` + +## API integration examples + +### Using RESTful APIs + +```python +import requests + +# Create a new user +user_data = { + "username": "john_doe", + "email": "john@example.com", + "bio": "Software developer" +} +response = requests.post("https://api.gibsonai.com/v1/-/users", json=user_data) +new_user = response.json() + +# Get all posts with pagination +response = requests.get("https://api.gibsonai.com/v1/-/posts?page=1&limit=10") +posts = response.json() + +# Create a new post +post_data = { + "title": "My First Post", + "content": "This is my first blog post!", + "user_id": new_user["id"], + "published": True +} +response = requests.post("https://api.gibsonai.com/v1/-/posts", json=post_data) +new_post = response.json() + +# Add tags to the post +tag_data = {"name": "technology", "description": "Tech-related posts"} +response = requests.post("https://api.gibsonai.com/v1/-/tags", json=tag_data) +tag = 
response.json() + +# Associate tag with post +post_tag_data = {"post_id": new_post["id"], "tag_id": tag["id"]} +response = requests.post("https://api.gibsonai.com/v1/-/post-tags", json=post_tag_data) +``` + +### Using direct SQL queries + +```python +# Complex queries using text-to-SQL +query = """ +SELECT p.title, p.content, u.username, COUNT(c.id) as comment_count +FROM posts p +JOIN users u ON p.user_id = u.id +LEFT JOIN comments c ON p.id = c.post_id +WHERE p.published = true +GROUP BY p.id, p.title, p.content, u.username +ORDER BY comment_count DESC +LIMIT 10 +""" + +response = requests.post("https://api.gibsonai.com/v1/-/query", json={"query": query}) +popular_posts = response.json() +``` + +### Using SQLAlchemy models + +```python +from sqlalchemy import create_engine, func +from sqlalchemy.orm import sessionmaker + +# Use connection string from GibsonAI UI +engine = create_engine("your-connection-string-from-ui") +Session = sessionmaker(bind=engine) +session = Session() + +# Complex queries using SQLAlchemy +popular_posts = session.query( + Post.title, + Post.content, + User.username, + func.count(Comment.id).label('comment_count') +).join(User).outerjoin(Comment).filter( + Post.published +).group_by(Post.id).order_by( + func.count(Comment.id).desc() +).limit(10).all() + +# Query with relationships +user_with_posts = session.query(User).filter( + User.username == 'john_doe' +).first() +user_posts = user_with_posts.posts ``` -## Integration examples +## Advanced API features -### Using the RESTful API +### Filtering and pagination ```python -import requests - -# Get all users -response = requests.get("https://api.gibsonai.com/v1/-/users") -users = response.json() +# Filter posts by published status with pagination +response = requests.get("https://api.gibsonai.com/v1/-/posts", params={ + "published": "true", + "page": 1, + "limit": 5, + "sort": "created_at", + "order": "desc" +}) +filtered_posts = response.json() -# Create a new user -user_data = { - "email": "john@example.com", - "name": "John Doe" -} -response = requests.post("https://api.gibsonai.com/v1/-/users", json=user_data) +# Filter users by username pattern +response = requests.get("https://api.gibsonai.com/v1/-/users", params={ + "username__like": "john%" +}) +matching_users = response.json() ``` -### Using direct SQL queries +### Relationship queries ```python -import requests +# Get user with their posts and comments +user_id = 1 +response = requests.get(f"https://api.gibsonai.com/v1/-/users/{user_id}", params={ + "include": "posts,comments" +}) +user_with_relations = response.json() -# Query with text-to-SQL -query = "SELECT * FROM users WHERE created_at > '2024-01-01'" -response = requests.post("https://api.gibsonai.com/v1/-/query", json={"query": query}) -results = response.json() +# Get posts with their tags +response = requests.get("https://api.gibsonai.com/v1/-/posts", params={ + "include": "tags,user" +}) +posts_with_tags = response.json() ``` -### Using SQLAlchemy models +## Text-to-SQL analysis examples -```python -from sqlalchemy import create_engine -from sqlalchemy.orm import sessionmaker +### Popular content analysis -# Use connection string from GibsonAI UI -engine = create_engine("your-connection-string-from-ui") -Session = sessionmaker(bind=engine) -session = Session() +```sql +-- Generated from: "Show me the most popular tags by post count" +SELECT + t.name, + t.description, + COUNT(pt.post_id) as post_count +FROM tags t +LEFT JOIN post_tags pt ON t.id = pt.tag_id +GROUP BY t.id, t.name, t.description 
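+-- rank tags from most to least used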
+ORDER BY post_count DESC +LIMIT 10; +``` -# Query using SQLAlchemy -users = session.query(User).filter(User.created_at > '2024-01-01').all() +### User engagement metrics + +```sql +-- Generated from: "Find users with the highest engagement" +SELECT + u.username, + u.email, + COUNT(DISTINCT p.id) as post_count, + COUNT(DISTINCT c.id) as comment_count, + COUNT(DISTINCT p.id) + COUNT(DISTINCT c.id) as total_activity +FROM users u +LEFT JOIN posts p ON u.id = p.user_id +LEFT JOIN comments c ON u.id = c.user_id +GROUP BY u.id, u.username, u.email +ORDER BY total_activity DESC +LIMIT 10; ``` ## Use cases -RAG schema generation +AI-driven schema generation -Unified API layer +SQL to database -Schema updates and migrations +Schema migrations @@ -9282,64 +9611,63 @@ users = session.query(User).filter(User.created_at > '2024-01-01').all() --- -title: Schema Management for Feature Development -subtitle: Manage database schema changes alongside feature development with development and production environments and Python models +title: Prompt-Driven Schema Generation for RAG Workflows +subtitle: Generate database schemas for RAG applications using natural language prompts with Pydantic and SQLAlchemy models enableTableOfContents: true updatedOn: '2025-01-08T00:00:00.000Z' --- -Manage database schema changes alongside feature development using GibsonAI's development and production environments with automatic Python model generation. Deploy schema updates safely and coordinate database changes with application features. +Generate database schemas for RAG applications using natural language prompts with Pydantic and SQLAlchemy models. Create optimized schemas for vector storage and retrieval workflows with AI-powered schema generation tailored for AI applications. ## How it works -GibsonAI provides separate development and production environments, allowing you to develop and test schema changes safely before deploying to production. The system automatically generates Pydantic schemas and SQLAlchemy models for seamless Python integration with your feature development workflow. +Describe your RAG use case in natural language, and GibsonAI generates an optimized MySQL database schema with vector storage capabilities and retrieval optimization. The system automatically creates Pydantic schemas and SQLAlchemy models for seamless Python integration. 
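+Note that GibsonAI stores embeddings (as JSON columns, as in the schema below) but does not rank them, so similarity search runs in your application. Here is a minimal sketch of that retrieval step, assuming the chunks endpoint from this guide and a placeholder `embed()` helper for whatever embedding service you use:
+
+```python
+import requests
+
+def embed(text):
+    """Placeholder: call your embedding service (OpenAI, Hugging Face, etc.)."""
+    raise NotImplementedError
+
+def top_chunks(question, k=5):
+    # Fetch stored chunks through the auto-generated REST API
+    chunks = requests.get("https://api.gibsonai.com/v1/-/chunks").json()
+    query_vec = embed(question)
+
+    def cosine(a, b):
+        dot = sum(x * y for x, y in zip(a, b))
+        norm_a = sum(x * x for x in a) ** 0.5
+        norm_b = sum(y * y for y in b) ** 0.5
+        return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0
+
+    # Rank candidates in application code; GibsonAI itself does no vector search
+    scored = [(cosine(query_vec, c["embedding"]), c) for c in chunks if c.get("embedding")]
+    scored.sort(key=lambda pair: pair[0], reverse=True)
+    return [c for _, c in scored[:k]]
+```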
-Environment management +AI-optimized schemas -Python models +Vector storage -Safe deployments +Python integration -Text-to-SQL +Text-to-SQL ## Key Features -### Environment-Based Development +### RAG-Optimized Schema Generation -- **Development Environment**: Safe testing ground for new schema changes -- **Production Environment**: Zero-downtime deployments to production -- **Environment Isolation**: Complete separation between development and production -- **Schema Synchronization**: Automatic promotion from development to production +- **Vector Storage**: Optimized tables for storing embeddings and vectors +- **Document Management**: Efficient document storage and indexing +- **Metadata Handling**: Structured metadata storage for enhanced retrieval +- **Relationship Modeling**: Proper relationships between documents, chunks, and vectors -### Python Integration +### Python AI Integration -- **Pydantic Schemas**: Type-safe models for feature validation +- **Pydantic Schemas**: Type-safe models for data validation - **SQLAlchemy Models**: ORM models for database operations -- **Automatic Updates**: Models updated automatically with schema changes -- **Feature-Specific Models**: Models tailored for your feature requirements +- **API Generation**: RESTful APIs for vector operations +- **Embedding Support**: Optimized storage for AI embeddings ### Text-to-SQL Analysis -- **Feature Analysis**: Ask questions about your feature data -- **Gibson Studio**: Run generated SQL queries to analyze feature usage -- **Schema Validation**: Validate schema changes before deployment -- **Performance Monitoring**: Monitor feature performance with natural language queries +- **Natural Language Queries**: Ask questions about your RAG data +- **Gibson Studio**: Run generated SQL queries in the intuitive data management UI +- **Vector Analysis**: Analyze vector similarity and document relationships +- **Performance Insights**: Optimize RAG query performance ## Step-by-step guide -### 1. Develop schema changes for your feature +### 1. Generate RAG schema with natural language ```bash -# Working in development environment -# Create schema changes for your feature -gibson modify users "Add a preferences column for user settings and a feature_flags column to track enabled features" +# Create a schema for document-based RAG +gibson modify documents "Create a RAG system with documents table containing title, content, and metadata, a chunks table for text segments with embeddings, and a queries table for tracking search queries" ``` -### 2. Generate Python models for your feature +### 2. Generate Python models ```bash # Generate Pydantic schemas for validation @@ -9352,221 +9680,223 @@ gibson code models gibson code base ``` -### 3. Test your feature with text-to-SQL - -Use Gibson Studio to analyze your feature data: +### 3. Explore RAG data with text-to-SQL -- "Show me users who have enabled the new feature" -- "What percentage of users have customized their preferences?" -- "Find any users with empty feature_flags" -- "Show feature adoption rates over time" +Use Gibson Studio to analyze your RAG system: -### 4. Deploy to production +- "Show me the most frequently queried documents" +- "Find documents with similar embeddings to a specific vector" +- "What's the average chunk size across all documents?" +- "Find documents that haven't been queried in the last month" -```bash -# Working in production environment -# Deploy your validated schema changes -gibson deploy -``` +### 4. 
Access your RAG data -## Example feature development workflow +Integration options: -### Feature schema changes +- **RESTful APIs**: Base URL `https://api.gibsonai.com` + - SQL queries: `/v1/-/query` + - Table operations: `/v1/-/[table-name-in-kebab-case]` +- **OpenAPI Spec**: Available in your project settings +- **Direct Connection**: Connection string available in the UI +- **API Documentation**: Available in the data API section -```bash -# Add schema for a new notification feature -gibson modify notifications "Create a notifications table with user_id, message, type, read status, and created_at timestamp" -``` +## Example RAG schema ### Generated database schema ```sql --- Generated for notification feature -CREATE TABLE notifications ( +-- Documents table for storing source documents +CREATE TABLE documents ( id INT AUTO_INCREMENT PRIMARY KEY, - user_id INT NOT NULL, - message TEXT NOT NULL, - type ENUM('info', 'warning', 'success', 'error') DEFAULT 'info', - read_status BOOLEAN DEFAULT FALSE, + title VARCHAR(255) NOT NULL, + content TEXT NOT NULL, + metadata JSON, + source_url VARCHAR(500), created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP, - FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE + updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP +); + +-- Chunks table for storing text segments +CREATE TABLE chunks ( + id INT AUTO_INCREMENT PRIMARY KEY, + document_id INT NOT NULL, + content TEXT NOT NULL, + embedding JSON, + chunk_index INT NOT NULL, + token_count INT, + created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP, + FOREIGN KEY (document_id) REFERENCES documents(id) ON DELETE CASCADE +); + +-- Queries table for tracking search queries +CREATE TABLE queries ( + id INT AUTO_INCREMENT PRIMARY KEY, + query_text TEXT NOT NULL, + query_embedding JSON, + results_found INT DEFAULT 0, + created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ); ``` -### Generated Pydantic schema +### Generated Pydantic schemas ```python from pydantic import BaseModel from datetime import datetime -from typing import Optional -from enum import Enum +from typing import Optional, Dict, Any, List -class NotificationType(str, Enum): - INFO = "info" - WARNING = "warning" - SUCCESS = "success" - ERROR = "error" +class Document(BaseModel): + id: Optional[int] = None + title: str + content: str + metadata: Optional[Dict[str, Any]] = None + source_url: Optional[str] = None + created_at: Optional[datetime] = None + updated_at: Optional[datetime] = None -class Notification(BaseModel): +class Chunk(BaseModel): + id: Optional[int] = None + document_id: int + content: str + embedding: Optional[List[float]] = None + chunk_index: int + token_count: Optional[int] = None + created_at: Optional[datetime] = None + +class Query(BaseModel): id: Optional[int] = None - user_id: int - message: str - type: NotificationType = NotificationType.INFO - read_status: bool = False + query_text: str + query_embedding: Optional[List[float]] = None + results_found: int = 0 created_at: Optional[datetime] = None ``` -### Generated SQLAlchemy model +### Generated SQLAlchemy models ```python -from sqlalchemy import Column, Integer, String, Text, Boolean, DateTime, ForeignKey, Enum +from sqlalchemy import Column, Integer, String, Text, JSON, DateTime, ForeignKey from sqlalchemy.ext.declarative import declarative_base from sqlalchemy.orm import relationship from datetime import datetime -import enum Base = declarative_base() -class NotificationType(enum.Enum): - INFO = "info" - WARNING = "warning" - SUCCESS = "success" - ERROR 
= "error" - -class Notification(Base): - __tablename__ = 'notifications' +class Document(Base): + __tablename__ = 'documents' id = Column(Integer, primary_key=True, autoincrement=True) - user_id = Column(Integer, ForeignKey('users.id'), nullable=False) - message = Column(Text, nullable=False) - type = Column(Enum(NotificationType), default=NotificationType.INFO) - read_status = Column(Boolean, default=False) + title = Column(String(255), nullable=False) + content = Column(Text, nullable=False) + metadata = Column(JSON) + source_url = Column(String(500)) created_at = Column(DateTime, default=datetime.utcnow) + updated_at = Column(DateTime, default=datetime.utcnow, onupdate=datetime.utcnow) - user = relationship("User", back_populates="notifications") -``` - -## Feature integration examples - -### Using Pydantic for feature validation - -```python -from pydantic import ValidationError - -# Validate notification data -try: - notification_data = { - "user_id": 1, - "message": "Welcome to our new feature!", - "type": "success" - } - notification = Notification(**notification_data) - print(f"Valid notification: {notification.message}") -except ValidationError as e: - print(f"Validation error: {e}") -``` + chunks = relationship("Chunk", back_populates="document", cascade="all, delete-orphan") -### Using SQLAlchemy for feature operations +class Chunk(Base): + __tablename__ = 'chunks' -```python -from sqlalchemy import create_engine -from sqlalchemy.orm import sessionmaker + id = Column(Integer, primary_key=True, autoincrement=True) + document_id = Column(Integer, ForeignKey('documents.id'), nullable=False) + content = Column(Text, nullable=False) + embedding = Column(JSON) + chunk_index = Column(Integer, nullable=False) + token_count = Column(Integer) + created_at = Column(DateTime, default=datetime.utcnow) -# Use connection string from GibsonAI UI -engine = create_engine("your-connection-string-from-ui") -Session = sessionmaker(bind=engine) -session = Session() + document = relationship("Document", back_populates="chunks") -# Create notification for feature rollout -new_notification = Notification( - user_id=1, - message="New feature is now available!", - type=NotificationType.SUCCESS -) -session.add(new_notification) -session.commit() +class Query(Base): + __tablename__ = 'queries' -# Query unread notifications -unread_notifications = session.query(Notification).filter( - Notification.read_status == False -).all() + id = Column(Integer, primary_key=True, autoincrement=True) + query_text = Column(Text, nullable=False) + query_embedding = Column(JSON) + results_found = Column(Integer, default=0) + created_at = Column(DateTime, default=datetime.utcnow) ``` -### Using RESTful APIs for feature integration +## RAG integration examples + +### Storing documents and embeddings ```python import requests +from typing import List -# Create notification via API -notification_data = { - "user_id": 1, - "message": "Feature successfully enabled!", - "type": "success" +# Store a document +document_data = { + "title": "Machine Learning Basics", + "content": "Machine learning is a subset of artificial intelligence...", + "metadata": {"category": "education", "tags": ["AI", "ML"]}, + "source_url": "https://example.com/ml-basics" } -response = requests.post("https://api.gibsonai.com/v1/-/notifications", json=notification_data) -new_notification = response.json() +response = requests.post("https://api.gibsonai.com/v1/-/documents", json=document_data) +document = response.json() -# Get unread notifications -response = 
requests.get("https://api.gibsonai.com/v1/-/notifications?read_status=false") -unread = response.json() -``` +# Store chunks with embeddings +chunk_data = { + "document_id": document["id"], + "content": "Machine learning is a subset of artificial intelligence", + "embedding": [0.1, 0.2, 0.3, ...], # Your embedding vector + "chunk_index": 0, + "token_count": 150 +} -## Feature rollout strategies +response = requests.post("https://api.gibsonai.com/v1/-/chunks", json=chunk_data) +``` -### Gradual rollout monitoring +### Querying with text-to-SQL -Use text-to-SQL to monitor feature adoption: +```python +# Use text-to-SQL to find similar documents +query = """ +SELECT d.title, d.content, c.content as chunk_content +FROM documents d +JOIN chunks c ON d.id = c.document_id +WHERE JSON_EXTRACT(d.metadata, '$.category') = 'education' +ORDER BY d.created_at DESC +LIMIT 10 +""" -```sql --- Generated from: "Show feature adoption rate by day" -SELECT - DATE(created_at) as rollout_date, - COUNT(*) as notifications_sent, - COUNT(CASE WHEN read_status = true THEN 1 END) as read_notifications -FROM notifications -WHERE type = 'success' -AND message LIKE '%feature%' -GROUP BY DATE(created_at) -ORDER BY rollout_date; +response = requests.post("https://api.gibsonai.com/v1/-/query", json={"query": query}) +results = response.json() ``` -### Feature performance analysis +### Using SQLAlchemy for vector operations -```sql --- Generated from: "Find users most engaged with notifications" -SELECT - u.id, - u.username, - COUNT(n.id) as total_notifications, - COUNT(CASE WHEN n.read_status = true THEN 1 END) as read_notifications, - ROUND(COUNT(CASE WHEN n.read_status = true THEN 1 END) / COUNT(n.id) * 100, 2) as read_percentage -FROM users u -LEFT JOIN notifications n ON u.id = n.user_id -GROUP BY u.id, u.username -ORDER BY read_percentage DESC; -``` +```python +from sqlalchemy import create_engine, func +from sqlalchemy.orm import sessionmaker -## Access your feature data +# Use connection string from GibsonAI UI +engine = create_engine("your-connection-string-from-ui") +Session = sessionmaker(bind=engine) +session = Session() -Integration options: +# Find documents by metadata +documents = session.query(Document).filter( + func.json_extract(Document.metadata, '$.category') == 'education' +).all() -- **RESTful APIs**: Base URL `https://api.gibsonai.com` - - SQL queries: `/v1/-/query` - - Table operations: `/v1/-/[table-name-in-kebab-case]` -- **OpenAPI Spec**: Available in your project settings -- **Direct Connection**: Connection string available in the UI -- **API Documentation**: Available in the data API section +# Get chunks for a document +document_id = 1 +chunks = session.query(Chunk).filter( + Chunk.document_id == document_id +).order_by(Chunk.chunk_index).all() +``` ## Use cases -Schema versioning +AI-driven schema generation Unified API layer -Schema migrations +Database optimization @@ -9576,78 +9906,78 @@ Integration options: --- -title: Provision a database for rapid development -subtitle: Rapidly spin up MySQL databases for prototyping, hackathons, and experimentation with Python models +title: SQL to fully functional database +subtitle: Transform SQL schemas into production-ready databases with APIs, Pydantic models, and SQLAlchemy integration enableTableOfContents: true updatedOn: '2025-01-08T00:00:00.000Z' --- -Rapidly spin up MySQL databases for prototyping, hackathons, or experimentation with Python models. 
+Transform your SQL schemas into production-ready databases with automatically generated APIs, Pydantic models, and SQLAlchemy integration. Go from schema to production in minutes, not days.

## How it works

-GibsonAI provides instant database provisioning with zero configuration. Simply describe what you need in natural language, and get a fully functional MySQL database with RESTful APIs, Pydantic schemas, SQLAlchemy models, and hosting ready to use immediately.
-
-Instant provisioning
-
-Python models
-
-Zero configuration
-
-Text-to-SQL
-
+Import your SQL schema file, and GibsonAI automatically creates a fully functional MySQL database with RESTful APIs, Pydantic schemas, and SQLAlchemy models. Perfect for rapid prototyping and production deployments.

## Key Features

### SQL Schema Import

-- **Zero Setup**: No configuration files or infrastructure setup required
-- **MySQL Database**: Fully managed MySQL database with autoscaling
-- **RESTful APIs**: Complete CRUD operations automatically generated
-- **Immediate Availability**: Database and APIs available instantly
+- **SQL Schema Files**: Direct import from .sql files with DDL statements
+- **MySQL Support**: Full support for MySQL databases and syntax
+- **Relationship Detection**: Automatically detects foreign key relationships
+- **Data Type Mapping**: Intelligent mapping of SQL data types to Python types

-### Python Development Ready
+### Automatic Code Generation

-- **Pydantic Schemas**: Type-safe validation schemas for all tables
-- **SQLAlchemy Models**: ORM models for database operations
-- **Code Generation**: Automatically generated Python code for integration
-- **Framework Support**: Compatible with FastAPI, Flask, Django, and more
+- **Pydantic Schemas**: Generate validation schemas for all your tables
+- **SQLAlchemy Models**: Create ORM models for database interactions
+- **API Documentation**: Automatically generated OpenAPI specifications
+- **Python Integration**: Ready-to-use Python code for your applications

### Text-to-SQL Analysis

-- **Natural Language Queries**: Ask questions about your data
+- **Natural Language Queries**: Ask questions about your imported data
- **Gibson Studio**: Run generated SQL queries in the intuitive data management UI
-- **Rapid Prototyping**: Test data structures and queries quickly
-- **Real-time Insights**: Get immediate feedback on your data models
+- **Data Exploration**: Discover patterns and insights in your imported schema
+- **Query Generation**: Automatically generate SQL from natural language

## Step-by-step guide

-### 1. Create your database with natural language
+### 1. Import your SQL schema

```bash
-# Create a simple app database
-gibson modify "Create a simple todo app with users and tasks. Users can create multiple tasks with title, description, due date, and completion status"
+# Import your schema (from a .sql schema file or an existing MySQL database)
+gibson import mysql
```

### 2. Generate Python models

```bash
-# Generate Pydantic schemas for validation
+# Generate Pydantic schemas
gibson code schemas

-# Generate SQLAlchemy models for database operations
+# Generate SQLAlchemy models
gibson code models

# Generate all Python code
gibson code base
```

### 3. 
Explore with text-to-SQL -Your database is ready with: +Use Gibson Studio to analyze your imported schema: + +- "Show me all tables and their relationships" +- "Which tables have the most foreign key constraints?" +- "Find any tables without primary keys" +- "What's the structure of the users table?" + +### 4. Deploy and access your database + +Your database is automatically deployed with: - **RESTful APIs**: Base URL `https://api.gibsonai.com` - SQL queries: `/v1/-/query` @@ -9656,112 +9986,56 @@ Your database is ready with: - **Direct Connection**: Connection string available in the UI - **API Documentation**: Available in the data API section -### 4. Explore with text-to-SQL - -Use Gibson Studio to test your ideas: - -- "Show me all completed tasks" -- "Which users have the most tasks?" -- "Find overdue tasks" -- "Show task completion rates by user" - -## Example rapid development workflow - -### Quick prototype creation - -```bash -# Create a social media prototype -gibson modify "Create a social media prototype with users, posts, likes, and follows. Users can create posts, like posts, and follow other users" -``` +## Example SQL import -### Generated database schema +### Sample schema file ```sql --- Users table +-- users.sql CREATE TABLE users ( id INT AUTO_INCREMENT PRIMARY KEY, - username VARCHAR(50) UNIQUE NOT NULL, email VARCHAR(255) UNIQUE NOT NULL, - display_name VARCHAR(100), - bio TEXT, + name VARCHAR(100) NOT NULL, created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ); --- Posts table CREATE TABLE posts ( id INT AUTO_INCREMENT PRIMARY KEY, + title VARCHAR(255) NOT NULL, + content TEXT, user_id INT NOT NULL, - content TEXT NOT NULL, - image_url VARCHAR(500), created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP, FOREIGN KEY (user_id) REFERENCES users(id) ); - --- Likes table -CREATE TABLE likes ( - id INT AUTO_INCREMENT PRIMARY KEY, - user_id INT NOT NULL, - post_id INT NOT NULL, - created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP, - FOREIGN KEY (user_id) REFERENCES users(id), - FOREIGN KEY (post_id) REFERENCES posts(id), - UNIQUE KEY unique_user_post (user_id, post_id) -); - --- Follows table -CREATE TABLE follows ( - id INT AUTO_INCREMENT PRIMARY KEY, - follower_id INT NOT NULL, - following_id INT NOT NULL, - created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP, - FOREIGN KEY (follower_id) REFERENCES users(id), - FOREIGN KEY (following_id) REFERENCES users(id), - UNIQUE KEY unique_follow (follower_id, following_id) -); ``` ### Generated Pydantic schemas ```python -from pydantic import BaseModel, EmailStr +from pydantic import BaseModel from datetime import datetime from typing import Optional class User(BaseModel): id: Optional[int] = None - username: str - email: EmailStr - display_name: Optional[str] = None - bio: Optional[str] = None + email: str + name: str created_at: Optional[datetime] = None class Post(BaseModel): id: Optional[int] = None + title: str + content: Optional[str] = None user_id: int - content: str - image_url: Optional[str] = None - created_at: Optional[datetime] = None - -class Like(BaseModel): - id: Optional[int] = None - user_id: int - post_id: int - created_at: Optional[datetime] = None - -class Follow(BaseModel): - id: Optional[int] = None - follower_id: int - following_id: int created_at: Optional[datetime] = None ``` ### Generated SQLAlchemy models ```python -from sqlalchemy import Column, Integer, String, Text, DateTime, ForeignKey, UniqueConstraint +from sqlalchemy import Column, Integer, String, Text, DateTime, ForeignKey from sqlalchemy.ext.declarative import 
declarative_base from sqlalchemy.orm import relationship -from datetime import datetime Base = declarative_base() @@ -9769,858 +10043,758 @@ class User(Base): __tablename__ = 'users' id = Column(Integer, primary_key=True, autoincrement=True) - username = Column(String(50), unique=True, nullable=False) email = Column(String(255), unique=True, nullable=False) - display_name = Column(String(100)) - bio = Column(Text) - created_at = Column(DateTime, default=datetime.utcnow) - - posts = relationship("Post", back_populates="user") - likes = relationship("Like", back_populates="user") - followers = relationship("Follow", foreign_keys="Follow.following_id", back_populates="following") - following = relationship("Follow", foreign_keys="Follow.follower_id", back_populates="follower") - -class Post(Base): - __tablename__ = 'posts' - - id = Column(Integer, primary_key=True, autoincrement=True) - user_id = Column(Integer, ForeignKey('users.id'), nullable=False) - content = Column(Text, nullable=False) - image_url = Column(String(500)) - created_at = Column(DateTime, default=datetime.utcnow) - - user = relationship("User", back_populates="posts") - likes = relationship("Like", back_populates="post") - -class Like(Base): - __tablename__ = 'likes' - - id = Column(Integer, primary_key=True, autoincrement=True) - user_id = Column(Integer, ForeignKey('users.id'), nullable=False) - post_id = Column(Integer, ForeignKey('posts.id'), nullable=False) - created_at = Column(DateTime, default=datetime.utcnow) - - __table_args__ = (UniqueConstraint('user_id', 'post_id', name='unique_user_post'),) - - user = relationship("User", back_populates="likes") - post = relationship("Post", back_populates="likes") - -class Follow(Base): - __tablename__ = 'follows' - - id = Column(Integer, primary_key=True, autoincrement=True) - follower_id = Column(Integer, ForeignKey('users.id'), nullable=False) - following_id = Column(Integer, ForeignKey('users.id'), nullable=False) - created_at = Column(DateTime, default=datetime.utcnow) - - __table_args__ = (UniqueConstraint('follower_id', 'following_id', name='unique_follow'),) - - follower = relationship("User", foreign_keys=[follower_id], back_populates="following") - following = relationship("User", foreign_keys=[following_id], back_populates="followers") -``` - -## Rapid development examples - -### Start coding immediately - -```python -import requests - -# Create a user -user_data = { - "username": "developer123", - "email": "dev@example.com", - "display_name": "Developer", - "bio": "Building awesome stuff!" 
-} -response = requests.post("https://api.gibsonai.com/v1/-/users", json=user_data) -new_user = response.json() - -# Create a post -post_data = { - "user_id": new_user["id"], - "content": "Just provisioned a database in seconds with GibsonAI!", - "image_url": "https://example.com/screenshot.png" -} -response = requests.post("https://api.gibsonai.com/v1/-/posts", json=post_data) -new_post = response.json() - -# Like the post -like_data = { - "user_id": new_user["id"], - "post_id": new_post["id"] -} -response = requests.post("https://api.gibsonai.com/v1/-/likes", json=like_data) -``` - -### Using SQLAlchemy for complex operations - -```python -from sqlalchemy import create_engine, func -from sqlalchemy.orm import sessionmaker - -# Use connection string from GibsonAI UI -engine = create_engine("your-connection-string-from-ui") -Session = sessionmaker(bind=engine) -session = Session() - -# Get popular posts -popular_posts = session.query( - Post.content, - User.username, - func.count(Like.id).label('like_count') -).join(User).outerjoin(Like).group_by( - Post.id -).order_by( - func.count(Like.id).desc() -).limit(10).all() - -# Get user feed with followed users' posts -user_id = 1 -feed = session.query(Post).join( - Follow, Post.user_id == Follow.following_id -).filter( - Follow.follower_id == user_id -).order_by(Post.created_at.desc()).limit(20).all() -``` - -### Text-to-SQL for rapid analysis - -```python -# Use text-to-SQL for quick insights -query = """ -SELECT u.username, COUNT(p.id) as post_count, COUNT(l.id) as total_likes -FROM users u -LEFT JOIN posts p ON u.id = p.user_id -LEFT JOIN likes l ON p.id = l.post_id -GROUP BY u.id, u.username -ORDER BY total_likes DESC -LIMIT 10 -""" - -response = requests.post("https://api.gibsonai.com/v1/-/query", json={"query": query}) -top_users = response.json() -``` - -## Common use cases - -### Hackathon projects - -```bash -# Create a voting app for hackathons -gibson modify "Create a voting app with users, proposals, and votes. Users can submit proposals and vote on them. Each user can only vote once per proposal" -``` - -### MVP development - -```bash -# Create an e-commerce MVP -gibson modify "Create an e-commerce MVP with products, users, orders, and reviews. Users can browse products, place orders, and leave reviews" -``` - -### Learning projects - -```bash -# Create a learning management system -gibson modify "Create a learning platform with students, courses, lessons, and progress tracking. 
Students can enroll in courses and track their progress" -``` - -## Performance and scalability - -### Instant scaling - -- **Autoscaling**: Database automatically scales with your application -- **Zero downtime**: No interruptions as your app grows -- **Global availability**: Hosted on scalable cloud infrastructure -- **Performance monitoring**: Built-in performance tracking + name = Column(String(100), nullable=False) + created_at = Column(DateTime, default=datetime.utcnow) -### Development to production + posts = relationship("Post", back_populates="user") -- **Environment promotion**: Easily move from development to production -- **Zero configuration**: No infrastructure changes needed -- **Continuous deployment**: Deploy changes instantly -- **Backup and recovery**: Automatic backups and point-in-time recovery +class Post(Base): + __tablename__ = 'posts' -## Use cases + id = Column(Integer, primary_key=True, autoincrement=True) + title = Column(String(255), nullable=False) + content = Column(Text) + user_id = Column(Integer, ForeignKey('users.id'), nullable=False) + created_at = Column(DateTime, default=datetime.utcnow) - + user = relationship("User", back_populates="posts") +``` -AI-driven schema generation +## Integration examples -Unified API layer +### Using the RESTful API -Feature development +```python +import requests - +# Get all users +response = requests.get("https://api.gibsonai.com/v1/-/users") +users = response.json() -## What's next? +# Create a new user +user_data = { + "email": "john@example.com", + "name": "John Doe" +} +response = requests.post("https://api.gibsonai.com/v1/-/users", json=user_data) +``` - +### Using direct SQL queries +```python +import requests ---- -title: AI-Powered App Builders -subtitle: Build faster full-stack apps with prompts and production-grade database. -updatedOn: '2025-07-10T22:31:52.611Z' ---- +# Query with text-to-SQL +query = "SELECT * FROM users WHERE created_at > '2024-01-01'" +response = requests.post("https://api.gibsonai.com/v1/-/query", json={"query": query}) +results = response.json() +``` - +### Using SQLAlchemy models - +```python +from sqlalchemy import create_engine +from sqlalchemy.orm import sessionmaker - +# Use connection string from GibsonAI UI +engine = create_engine("your-connection-string-from-ui") +Session = sessionmaker(bind=engine) +session = Session() - +# Query using SQLAlchemy +users = session.query(User).filter(User.created_at > '2024-01-01').all() +``` - +## Use cases - + - +RAG schema generation ---- -title: AI Agent Frameworks -subtitle: Build smarter AI agents using popular frameworks that integrate with GibsonAI. -updatedOn: '2025-06-29T22:31:52.611Z' ---- +Unified API layer - +Schema updates and migrations - + - +## What's next? - + - --- -title: How to create a SQL Agent with LangChain, LangGraph and GibsonAI -subtitle: Step-by-step guide on how to create a SQL Agent with LangChain, LangGraph and GibsonAI +title: Schema Management for Feature Development +subtitle: Manage database schema changes alongside feature development with development and production environments and Python models enableTableOfContents: true -updatedOn: '2025-01-29T22:31:52.611Z' +updatedOn: '2025-01-08T00:00:00.000Z' --- -This guide will show you how to build a SQL Agent that can **create, modify, and manage databases** using **[GibsonAI MCP Server](https://docs.gibsonai.com/ai/mcp-server)** and **[LangChain](https://langchain.com/)** with **[LangGraph](https://langchain-ai.github.io/langgraph/)**. 
- -## What You'll Build - -- A **SQL Agent** powered by LangChain/LangGraph that can: - - **Create new databases and tables** from natural language prompts. - - **Modify existing schemas** (add, remove, or update columns and tables). - - **Deploy schema changes** to serverless databases (e.g., MySQL). - - **Inspect and query database schemas** with conversational commands. - - **Execute SQL queries** and get formatted results. +Manage database schema changes alongside feature development using GibsonAI's development and production environments with automatic Python model generation. Deploy schema updates safely and coordinate database changes with application features. -## Key Concepts +## How it works -- **GibsonAI MCP Server:** Turns natural language prompts into fully functional database schemas. -- **From Prompt to Database:** You can go from describing a database in plain English to having a running schema with deployed APIs in minutes. -- **LangGraph ReAct Agent:** Uses reasoning and action cycles to interact with GibsonAI MCP tools effectively. +GibsonAI provides separate development and production environments, allowing you to develop and test schema changes safely before deploying to production. The system automatically generates Pydantic schemas and SQLAlchemy models for seamless Python integration with your feature development workflow. -> The **GibsonAI MCP integration with LangChain** uses the official MCP adapters to seamlessly connect LangChain agents with GibsonAI's database management capabilities. + -## Prerequisites +Environment management -Before starting, ensure you have: +Python models -1. **A GibsonAI account** – Sign up at [https://app.gibsonai.com](https://app.gibsonai.com/). -2. **Python 3.9+** installed. -3. **OpenAI API key** (you can get one from [OpenAI](https://platform.openai.com/)). +Safe deployments - +Text-to-SQL -## Install UV Package Manager + -[UV](https://docs.astral.sh/uv/) is needed to run GibsonAI CLI. +## Key Features -Run: +### Environment-Based Development -```bash -curl -LsSf https://astral.sh/uv/install.sh | sh -``` +- **Development Environment**: Safe testing ground for new schema changes +- **Production Environment**: Zero-downtime deployments to production +- **Environment Isolation**: Complete separation between development and production +- **Schema Synchronization**: Automatic promotion from development to production -## Install GibsonAI CLI +### Python Integration -The GibsonAI CLI lets you log in and manage projects: +- **Pydantic Schemas**: Type-safe models for feature validation +- **SQLAlchemy Models**: ORM models for database operations +- **Automatic Updates**: Models updated automatically with schema changes +- **Feature-Specific Models**: Models tailored for your feature requirements -```bash -uvx --from gibson-cli@latest gibson auth login -``` +### Text-to-SQL Analysis -Log in with your GibsonAI account. +- **Feature Analysis**: Ask questions about your feature data +- **Gibson Studio**: Run generated SQL queries to analyze feature usage +- **Schema Validation**: Validate schema changes before deployment +- **Performance Monitoring**: Monitor feature performance with natural language queries -## Install Python Dependencies +## Step-by-step guide -Install LangChain, LangGraph, MCP adapters, and OpenAI libraries: +### 1. 
Develop schema changes for your feature ```bash -pip install mcp langchain-mcp-adapters langgraph langchain-openai +# Working in development environment +# Create schema changes for your feature +gibson modify users "Add a preferences column for user settings and a feature_flags column to track enabled features" ``` -## Set Your OpenAI API Key - -Export your API key: +### 2. Generate Python models for your feature ```bash -export OPENAI_API_KEY="your_openai_api_key" -``` - -*(Replace `your_openai_api_key` with your real key.)* - -## Create a Python File - -Create a new Python file (e.g., `agent.py`) and copy this code: - -```python -import asyncio -import os -from mcp import ClientSession, StdioServerParameters -from mcp.client.stdio import stdio_client -from langchain_mcp_adapters.tools import load_mcp_tools -from langgraph.prebuilt import create_react_agent -from langchain_openai import ChatOpenAI - -class GibsonAIAgent: - """LangChain + LangGraph agent for GibsonAI database management""" - - def __init__(self): - # Initialize OpenAI model - self.model = ChatOpenAI( - model="gpt-4o", - temperature=0.1, - api_key=os.getenv("OPENAI_API_KEY") - ) - - # GibsonAI MCP server parameters - self.server_params = StdioServerParameters( - command="uvx", - args=["--from", "gibson-cli@latest", "gibson", "mcp", "run"] - ) - - async def run_agent(self, message: str) -> None: - """Run the GibsonAI agent with the given message.""" - try: - async with stdio_client(self.server_params) as (read, write): - async with ClientSession(read, write) as session: - # Initialize MCP session - await session.initialize() - - # Load all GibsonAI MCP tools - tools = await load_mcp_tools(session) +# Generate Pydantic schemas for validation +gibson code schemas - # Create ReAct agent with tools - agent = create_react_agent( - self.model, - tools, - state_modifier="""You are a GibsonAI database assistant. - Help users manage their database projects and schemas. +# Generate SQLAlchemy models for database operations +gibson code models - Your capabilities include: - - Run SQL queries and get results - - Creating new GibsonAI projects - - Managing database schemas (tables, columns, relationships) - - Deploying schema changes to hosted databases - - Querying database schemas and data - - Providing insights about database structure and best practices - - Always be helpful and explain what you're doing step by step. - When creating schemas, use appropriate data types and constraints. - Consider relationships between tables and suggest indexes where appropriate. - Be conversational and provide clear explanations of your actions.""", - ) +# Generate all Python code +gibson code base +``` - # Execute the agent - result = await agent.ainvoke( - {"messages": [{"role": "user", "content": message}]} - ) +### 3. Test your feature with text-to-SQL - # Print the response - if "messages" in result: - for msg in result["messages"]: - if hasattr(msg, "content") and msg.content: - print(f"\n🤖 {msg.content}\n") - elif hasattr(msg, "tool_calls") and msg.tool_calls: - for tool_call in msg.tool_calls: - print(f"🛠️ Calling tool: {tool_call['name']}") - if tool_call.get("args"): - print(f" Args: {tool_call['args']}") +Use Gibson Studio to analyze your feature data: - except Exception as e: - print(f"Error running agent: {str(e)}") +- "Show me users who have enabled the new feature" +- "What percentage of users have customized their preferences?" 
+- "Find any users with empty feature_flags" +- "Show feature adoption rates over time" -async def run_gibsonai_agent(message: str) -> None: - """Convenience function to run the GibsonAI agent""" - agent = GibsonAIAgent() - await agent.run_agent(message) +### 4. Deploy to production -# Example usage -if __name__ == "__main__": - asyncio.run( - run_gibsonai_agent( - "Create a database for a blog posts platform with users and posts tables." - ) - ) +```bash +# Working in production environment +# Deploy your validated schema changes +gibson deploy ``` -## Run the Agent +## Example feature development workflow -Run the script: +### Feature schema changes ```bash -python agent.py +# Add schema for a new notification feature +gibson modify notifications "Create a notifications table with user_id, message, type, read status, and created_at timestamp" ``` -The agent will: +### Generated database schema -- Start the local **GibsonAI MCP Server**. -- Use **LangGraph's ReAct agent** to reason about your request. -- Take your prompt (e.g., "Create a database for a blog with users and posts tables"). -- Automatically create a database schema using GibsonAI tools. -- Show you step-by-step what actions it's taking. +```sql +-- Generated for notification feature +CREATE TABLE notifications ( + id INT AUTO_INCREMENT PRIMARY KEY, + user_id INT NOT NULL, + message TEXT NOT NULL, + type ENUM('info', 'warning', 'success', 'error') DEFAULT 'info', + read_status BOOLEAN DEFAULT FALSE, + created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP, + FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE +); +``` -## View Your Database +### Generated Pydantic schema -Go to your **GibsonAI Dashboard**: +```python +from pydantic import BaseModel +from datetime import datetime +from typing import Optional +from enum import Enum -[https://app.gibsonai.com](https://app.gibsonai.com/) +class NotificationType(str, Enum): + INFO = "info" + WARNING = "warning" + SUCCESS = "success" + ERROR = "error" -Here, you can: +class Notification(BaseModel): + id: Optional[int] = None + user_id: int + message: str + type: NotificationType = NotificationType.INFO + read_status: bool = False + created_at: Optional[datetime] = None +``` -- See your database schema. -- Check generated REST APIs for your data. -- Monitor database performance and usage. 
+### Generated SQLAlchemy model - +```python +from sqlalchemy import Column, Integer, String, Text, Boolean, DateTime, ForeignKey, Enum +from sqlalchemy.ext.declarative import declarative_base +from sqlalchemy.orm import relationship +from datetime import datetime +import enum -## Example Prompts to Try +Base = declarative_base() -You can experiment with these prompts: +class NotificationType(enum.Enum): + INFO = "info" + WARNING = "warning" + SUCCESS = "success" + ERROR = "error" -- **"Show me the current schema for my project."** -- **"Add a 'products' table with name, price, and description fields."** -- **"Create a 'users' table with authentication fields."** -- **"Deploy my schema changes to production."** -- **"Run a query to show all users from the database."** -- **"Create a new database for an e-commerce platform."** -- **"Add a foreign key relationship between users and posts tables."** +class Notification(Base): + __tablename__ = 'notifications' -## Advanced Features + id = Column(Integer, primary_key=True, autoincrement=True) + user_id = Column(Integer, ForeignKey('users.id'), nullable=False) + message = Column(Text, nullable=False) + type = Column(Enum(NotificationType), default=NotificationType.INFO) + read_status = Column(Boolean, default=False) + created_at = Column(DateTime, default=datetime.utcnow) -### Custom Agent Instructions + user = relationship("User", back_populates="notifications") +``` -You can customize the agent's behavior by modifying the `state_modifier` parameter: +## Feature integration examples + +### Using Pydantic for feature validation ```python -agent = create_react_agent( - self.model, - tools, - state_modifier="""You are a specialized e-commerce database expert. - Focus on creating optimized schemas for online stores with proper - indexing and relationships for high-performance queries.""", -) +from pydantic import ValidationError + +# Validate notification data +try: + notification_data = { + "user_id": 1, + "message": "Welcome to our new feature!", + "type": "success" + } + notification = Notification(**notification_data) + print(f"Valid notification: {notification.message}") +except ValidationError as e: + print(f"Validation error: {e}") ``` -### Error Handling and Logging +### Using SQLAlchemy for feature operations -Add robust error handling for production use: +```python +from sqlalchemy import create_engine +from sqlalchemy.orm import sessionmaker + +# Use connection string from GibsonAI UI +engine = create_engine("your-connection-string-from-ui") +Session = sessionmaker(bind=engine) +session = Session() + +# Create notification for feature rollout +new_notification = Notification( + user_id=1, + message="New feature is now available!", + type=NotificationType.SUCCESS +) +session.add(new_notification) +session.commit() + +# Query unread notifications +unread_notifications = session.query(Notification).filter( + Notification.read_status == False +).all() +``` + +### Using RESTful APIs for feature integration ```python -import logging +import requests -logging.basicConfig(level=logging.INFO) -logger = logging.getLogger(__name__) +# Create notification via API +notification_data = { + "user_id": 1, + "message": "Feature successfully enabled!", + "type": "success" +} -try: - result = await agent.ainvoke({"messages": [{"role": "user", "content": message}]}) - logger.info("Agent execution completed successfully") -except Exception as e: - logger.error(f"Agent execution failed: {str(e)}") - # Handle specific error cases +response = 
requests.post("https://api.gibsonai.com/v1/-/notifications", json=notification_data) +new_notification = response.json() + +# Get unread notifications +response = requests.get("https://api.gibsonai.com/v1/-/notifications?read_status=false") +unread = response.json() ``` -### Multiple Project Management +## Feature rollout strategies -Create agents that can work with multiple GibsonAI projects: +### Gradual rollout monitoring -```python -async def run_multi_project_agent(message: str, project_id: str = None) -> None: - """Run agent with specific project context""" - if project_id: - message = f"Working with project {project_id}: {message}" - - agent = GibsonAIAgent() - await agent.run_agent(message) +Use text-to-SQL to monitor feature adoption: + +```sql +-- Generated from: "Show feature adoption rate by day" +SELECT + DATE(created_at) as rollout_date, + COUNT(*) as notifications_sent, + COUNT(CASE WHEN read_status = true THEN 1 END) as read_notifications +FROM notifications +WHERE type = 'success' +AND message LIKE '%feature%' +GROUP BY DATE(created_at) +ORDER BY rollout_date; ``` -## Why LangChain + GibsonAI? +### Feature performance analysis -- **Tool Integration:** LangChain's MCP adapters seamlessly connect to GibsonAI's database tools. -- **Reasoning:** LangGraph's ReAct pattern provides intelligent planning and execution. -- **Flexibility:** Easy to extend with additional LangChain tools and chains. -- **Observability:** Built-in logging and debugging capabilities. -- **Production Ready:** Robust error handling and async support. +```sql +-- Generated from: "Find users most engaged with notifications" +SELECT + u.id, + u.username, + COUNT(n.id) as total_notifications, + COUNT(CASE WHEN n.read_status = true THEN 1 END) as read_notifications, + ROUND(COUNT(CASE WHEN n.read_status = true THEN 1 END) / COUNT(n.id) * 100, 2) as read_percentage +FROM users u +LEFT JOIN notifications n ON u.id = n.user_id +GROUP BY u.id, u.username +ORDER BY read_percentage DESC; +``` -### Key Advantages Over Traditional Approaches +## Access your feature data -- **No Complex Prompting:** Skip writing lengthy system prompts to teach your agent SQL operations. GibsonAI's MCP tools handle database interactions automatically, so your agent knows exactly how to create tables, run queries, and manage schemas without custom instruction engineering. +Integration options: -- **No Custom Tool Development:** Forget building your own database connection tools or SQL execution wrappers. GibsonAI provides pre-built MCP tools that work out-of-the-box with any LangChain agent. +- **RESTful APIs**: Base URL `https://api.gibsonai.com` + - SQL queries: `/v1/-/query` + - Table operations: `/v1/-/[table-name-in-kebab-case]` +- **OpenAPI Spec**: Available in your project settings +- **Direct Connection**: Connection string available in the UI +- **API Documentation**: Available in the data API section -- **Unified Database Support:** No need to manage separate MCP servers for different databases. GibsonAI handles MySQL today and PostgreSQL support is coming in the next two weeks - all through the same simple interface. +## Use cases -- **Avoid LangChain SQL Toolkit Issues:** LangChain's built-in SQL database toolkit has known limitations with complex queries, connection management, and error handling. GibsonAI's MCP tools provide a more reliable alternative with better error messages and query optimization. 
+ -- **Sandboxed Database Environment:** Your agent can safely run SQL queries in isolated database environments without affecting production data. Each project gets its own secure sandbox, perfect for development and testing. +Schema versioning -## Next Steps +Unified API layer + +Schema migrations + + + +## What's next? + + -- Explore the [GibsonAI MCP Server documentation](https://docs.gibsonai.com/ai/mcp-server) for advanced features. -- Learn about [LangGraph patterns](https://langchain-ai.github.io/langgraph/) for complex workflows. -- Check out [LangChain's tool ecosystem](https://python.langchain.com/docs/integrations/tools/) for additional capabilities. --- -title: How to create a SQL Agent with Agno and GibsonAI -subtitle: Step-by-step guide on how to create a SQL Agent with Agno and GibsonAI +title: Provision a database for rapid development +subtitle: Rapidly spin up MySQL databases for prototyping, hackathons, and experimentation with Python models enableTableOfContents: true -updatedOn: '2025-07-28T22:31:52.611Z' +updatedOn: '2025-01-08T00:00:00.000Z' --- -This guide will show you how to build a SQL Agent that can **create, modify, and manage databases** using **[GibsonAI MCP Server](https://docs.gibsonai.com/ai/mcp-server)** and **[Agno](https://www.agno.com?utm_source=gibsonai&utm_medium=partner-docs&utm_campaign=partner-technical&utm_content=sql-agent-gibsonai-guide)**. - -## What You’ll Build - -- A **SQL Agent** powered by Agno that can: - - **Create new databases and tables** from natural language prompts. - - **Modify existing schemas** (add, remove, or update columns and tables). - - **Deploy schema changes** to serverless databases (e.g., MySQL). - - **Inspect and query database schemas** with conversational commands. +Rapidly spin up MySQL databases for prototyping, hackathons, or experimentation with Python models. Get a full-featured database environment with APIs, Pydantic schemas, and SQLAlchemy models in seconds without configuration overhead. Perfect for developers who need to move fast and iterate quickly. -## Key Concepts +## How it works -- **GibsonAI MCP Server:** Turns natural language prompts into fully functional database schemas and exposes **REST APIs** for data access and CRUD operations. -- **From Prompt to Database:** You can go from describing a database in plain English to having a running schema with deployed APIs in minutes. -- **Serverless Data APIs:** Once your schema is created, GibsonAI provides instant endpoints (e.g., `/query` for SQL operations or `/{tablename}` for CRUD). +GibsonAI provides instant database provisioning with zero configuration. Simply describe what you need in natural language, and get a fully functional MySQL database with RESTful APIs, Pydantic schemas, SQLAlchemy models, and hosting ready to use immediately. -> The **GibsonAI MCP integration in Agno** is available in the Agno repo: [GibsonAI MCP Toolkit – agno/cookbook/tools/mcp/gibsonai.py](https://github.com/agno-agi/agno/blob/main/cookbook/tools/mcp/gibsonai.py) -> + -## Prerequisites +Instant provisioning -Before starting, ensure you have: +Python models -1. **A GibsonAI account** – Sign up at [https://app.gibsonai.com](https://app.gibsonai.com/). -2. **Python 3.9+** installed. -3. **OpenAI API key** (you can get one from [OpenAI](https://platform.openai.com/)). +Zero configuration - +Text-to-SQL -## Install UV Package Manager + -[UV](https://docs.astral.sh/uv/) is needed to run GibsonAI CLI. 
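+Once a database is provisioned, every table is immediately reachable over REST with no extra setup. As a quick sketch (it assumes the `tasks` table from the todo example in the step-by-step guide below, plus an API key from your project in the GibsonAI UI):

+```python
+import requests
+
+# Each table gets CRUD endpoints at /v1/-/[table-name-in-kebab-case]
+response = requests.get(
+    "https://api.gibsonai.com/v1/-/tasks",
+    headers={"X-Gibson-API-Key": "your-project-api-key"},
+)
+print(response.json())
+```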
+## Key Features -Run: +### Instant Database Creation -```bash -curl -LsSf https://astral.sh/uv/install.sh | sh -``` +- **Zero Setup**: No configuration files or infrastructure setup required +- **MySQL Database**: Fully managed MySQL database with autoscaling +- **RESTful APIs**: Complete CRUD operations automatically generated +- **Immediate Availability**: Database and APIs available instantly -## Install GibsonAI CLI +### Python Development Ready -The GibsonAI CLI lets you log in and manage projects: +- **Pydantic Schemas**: Type-safe validation schemas for all tables +- **SQLAlchemy Models**: ORM models for database operations +- **Code Generation**: Automatically generated Python code for integration +- **Framework Support**: Compatible with FastAPI, Flask, Django, and more -```bash -uvx --from gibson-cli@latest gibson auth login -``` +### Text-to-SQL Analysis -Log in with your GibsonAI account. +- **Natural Language Queries**: Ask questions about your data +- **Gibson Studio**: Run generated SQL queries in the intuitive data management UI +- **Rapid Prototyping**: Test data structures and queries quickly +- **Real-time Insights**: Get immediate feedback on your data models -## Install Python Dependencies +## Step-by-step guide -Install Agno, MCP, and OpenAI libraries: +### 1. Create your database with natural language ```bash -pip install agno mcp openai +# Create a simple app database +gibson modify "Create a simple todo app with users and tasks. Users can create multiple tasks with title, description, due date, and completion status" ``` -## Set Your OpenAI API Key - -Export your API key: +### 2. Generate Python models ```bash +# Generate Pydantic schemas for validation +gibson code schemas -export OPENAI_API_KEY="your_openai_api_key" -``` - -*(Replace `your_openai_api_key` with your real key.)* +# Generate SQLAlchemy models for database operations +gibson code models -## Create a Python File +# Generate all Python code +gibson code base +``` -Create a new Python file (e.g., `sql_agent.py`) and copy this code: +### 3. Start building immediately -```python -import asyncio -from textwrap import dedent +Your database is ready with: -from agno.agent import Agent -from agno.models.openai import OpenAIChat -from agno.tools.mcp import MCPTools +- **RESTful APIs**: Base URL `https://api.gibsonai.com` + - SQL queries: `/v1/-/query` + - Table operations: `/v1/-/[table-name-in-kebab-case]` +- **OpenAPI Spec**: Available in your project settings +- **Direct Connection**: Connection string available in the UI +- **API Documentation**: Available in the data API section -async def run_gibsonai_agent(message: str) -> None: - """Run the GibsonAI SQL Agent with the given message.""" - async with MCPTools( - "uvx --from gibson-cli@latest gibson mcp run", - timeout_seconds=300, # Longer timeout for database operations - ) as mcp_tools: - agent = Agent( - name="GibsonAIAgent", - model=OpenAIChat(id="gpt-4o"), - tools=[mcp_tools], - description="SQL Agent for managing database projects and schemas", - instructions=dedent("""\ - You are a GibsonAI database assistant. - Help users manage databases and schemas by creating tables, - updating columns, and deploying schema changes. - """), - markdown=True, - show_tool_calls=True, - ) +### 4. Explore with text-to-SQL - await agent.aprint_response(message, stream=True) +Use Gibson Studio to test your ideas: -# Example usage -if __name__ == "__main__": - asyncio.run( - run_gibsonai_agent( - "Create a database for a blog with users and posts tables." 
- ) - ) -``` +- "Show me all completed tasks" +- "Which users have the most tasks?" +- "Find overdue tasks" +- "Show task completion rates by user" -## Run the Agent +## Example rapid development workflow -Run the script: +### Quick prototype creation ```bash -python sql_agent.py +# Create a social media prototype +gibson modify "Create a social media prototype with users, posts, likes, and follows. Users can create posts, like posts, and follow other users" ``` -The agent will: +### Generated database schema -- Start the **GibsonAI MCP Server**. -- Take your prompt (e.g., "Create a database for a blog with users and posts tables"). -- Automatically create a database schema. +```sql +-- Users table +CREATE TABLE users ( + id INT AUTO_INCREMENT PRIMARY KEY, + username VARCHAR(50) UNIQUE NOT NULL, + email VARCHAR(255) UNIQUE NOT NULL, + display_name VARCHAR(100), + bio TEXT, + created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP +); -## View Your Database +-- Posts table +CREATE TABLE posts ( + id INT AUTO_INCREMENT PRIMARY KEY, + user_id INT NOT NULL, + content TEXT NOT NULL, + image_url VARCHAR(500), + created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP, + FOREIGN KEY (user_id) REFERENCES users(id) +); -Go to your **GibsonAI Dashboard**: +-- Likes table +CREATE TABLE likes ( + id INT AUTO_INCREMENT PRIMARY KEY, + user_id INT NOT NULL, + post_id INT NOT NULL, + created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP, + FOREIGN KEY (user_id) REFERENCES users(id), + FOREIGN KEY (post_id) REFERENCES posts(id), + UNIQUE KEY unique_user_post (user_id, post_id) +); -[https://app.gibsonai.com](https://app.gibsonai.com/) +-- Follows table +CREATE TABLE follows ( + id INT AUTO_INCREMENT PRIMARY KEY, + follower_id INT NOT NULL, + following_id INT NOT NULL, + created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP, + FOREIGN KEY (follower_id) REFERENCES users(id), + FOREIGN KEY (following_id) REFERENCES users(id), + UNIQUE KEY unique_follow (follower_id, following_id) +); +``` -Here, you can: +### Generated Pydantic schemas -- See your database schema. -- Check generated REST APIs for your data. 
+```python +from pydantic import BaseModel, EmailStr +from datetime import datetime +from typing import Optional - +class User(BaseModel): + id: Optional[int] = None + username: str + email: EmailStr + display_name: Optional[str] = None + bio: Optional[str] = None + created_at: Optional[datetime] = None -## Example Prompts to Try +class Post(BaseModel): + id: Optional[int] = None + user_id: int + content: str + image_url: Optional[str] = None + created_at: Optional[datetime] = None -You can experiment with: +class Like(BaseModel): + id: Optional[int] = None + user_id: int + post_id: int + created_at: Optional[datetime] = None -- **"Show me the current schema for my project."** -- **"Add a 'products' table with name, price, and description."** -- **"Deploy schema changes to production."** -- **"Create a new database for a task management app."** +class Follow(BaseModel): + id: Optional[int] = None + follower_id: int + following_id: int + created_at: Optional[datetime] = None +``` - +### Generated SQLAlchemy models ---- -title: How to Create an AI Agent for SQL Queries with CrewAI and GibsonAI -subtitle: Step-by-step guide on how to create a AI Agent for SQL Queries with CrewAI and GibsonAI -enableTableOfContents: true -updatedOn: '2025-07-28T22:31:52.611Z' ---- +```python +from sqlalchemy import Column, Integer, String, Text, DateTime, ForeignKey, UniqueConstraint +from sqlalchemy.ext.declarative import declarative_base +from sqlalchemy.orm import relationship +from datetime import datetime -This guide explains how to build an AI Agent using **CrewAI** for orchestrating SQL queries and **GibsonAI** for handling data storage and CRUD operations via its **Data API**. +Base = declarative_base() -## What You’ll Build +class User(Base): + __tablename__ = 'users' + + id = Column(Integer, primary_key=True, autoincrement=True) + username = Column(String(50), unique=True, nullable=False) + email = Column(String(255), unique=True, nullable=False) + display_name = Column(String(100)) + bio = Column(Text) + created_at = Column(DateTime, default=datetime.utcnow) + + posts = relationship("Post", back_populates="user") + likes = relationship("Like", back_populates="user") + followers = relationship("Follow", foreign_keys="Follow.following_id", back_populates="following") + following = relationship("Follow", foreign_keys="Follow.follower_id", back_populates="follower") -- A **CrewAI Agent** that uses the **GibsonAI Data API** to read and write data. -- You will define tables in GibsonAI, and CrewAI will use its API to **query or insert records**. -- The example provided demonstrates **storing sales contact information** in GibsonAI. +class Post(Base): + __tablename__ = 'posts' -## Key Concept + id = Column(Integer, primary_key=True, autoincrement=True) + user_id = Column(Integer, ForeignKey('users.id'), nullable=False) + content = Column(Text, nullable=False) + image_url = Column(String(500)) + created_at = Column(DateTime, default=datetime.utcnow) -- **GibsonAI exposes a REST Data API** for all created tables. -- **CrewAI can query and perform CRUD operations** directly via this API, making it a powerful backend for AI agents. -- The ability to execute **SQL queries via GibsonAI’s `/query` endpoint**. 
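+    # back_populates wires each relationship to its mirror attribute on the
+    # other model (Post.user here pairs with User.posts above), so both sides
+    # of the relationship stay in sync within a session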
+ user = relationship("User", back_populates="posts") + likes = relationship("Like", back_populates="post") -**GitHub Repo Link:** [Sales Contact Finder (CrewAI + GibsonAI)](https://github.com/GibsonAI/awesome-gibson/tree/main/sales_contact_finder) +class Like(Base): + __tablename__ = 'likes' -## Prerequisites + id = Column(Integer, primary_key=True, autoincrement=True) + user_id = Column(Integer, ForeignKey('users.id'), nullable=False) + post_id = Column(Integer, ForeignKey('posts.id'), nullable=False) + created_at = Column(DateTime, default=datetime.utcnow) -Before you begin, ensure you have: + __table_args__ = (UniqueConstraint('user_id', 'post_id', name='unique_user_post'),) -1. **A GibsonAI Account** – Sign up at [https://app.gibsonai.com](https://app.gibsonai.com/). -2. **A GibsonAI API Key** – Create a project in GibsonAI and copy the API key from the **Connect tab**. -3. **Python 3.9+** installed. -4. **OpenAI API Key** – [Get one here](https://platform.openai.com/). -5. **Serper.dev API Key** (if using web scraping/search features). + user = relationship("User", back_populates="likes") + post = relationship("Post", back_populates="likes") - +class Follow(Base): + __tablename__ = 'follows' -## Generate Your Database Schema in GibsonAI + id = Column(Integer, primary_key=True, autoincrement=True) + follower_id = Column(Integer, ForeignKey('users.id'), nullable=False) + following_id = Column(Integer, ForeignKey('users.id'), nullable=False) + created_at = Column(DateTime, default=datetime.utcnow) -Use the following prompt in GibsonAI to create the schema: + __table_args__ = (UniqueConstraint('follower_id', 'following_id', name='unique_follow'),) -```bash -I want to create a sales contact aggregator agent. -Generate a “sales_contact” table with fields (company_id, name, title, linkedin_url, phone, email). -Also create a “sales_company” table with fields (name). -All string fields, except name, are nullable. + follower = relationship("User", foreign_keys=[follower_id], back_populates="following") + following = relationship("User", foreign_keys=[following_id], back_populates="followers") ``` -Click **Deploy** and copy the **API Key**. +## Rapid development examples ---- +### Start coding immediately -## Clone the Sales Contact Finder Example +```python +import requests -This example lives in the **awesome-gibson** repo. Clone it: +# Create a user +user_data = { + "username": "developer123", + "email": "dev@example.com", + "display_name": "Developer", + "bio": "Building awesome stuff!" 
+} +response = requests.post("https://api.gibsonai.com/v1/-/users", json=user_data) +new_user = response.json() -```bash -git clone https://github.com/GibsonAI/awesome-gibson.git -cd awesome-gibson/sales_contact_finder +# Create a post +post_data = { + "user_id": new_user["id"], + "content": "Just provisioned a database in seconds with GibsonAI!", + "image_url": "https://example.com/screenshot.png" +} +response = requests.post("https://api.gibsonai.com/v1/-/posts", json=post_data) +new_post = response.json() + +# Like the post +like_data = { + "user_id": new_user["id"], + "post_id": new_post["id"] +} +response = requests.post("https://api.gibsonai.com/v1/-/likes", json=like_data) ``` -## Configure Your Environment +### Using SQLAlchemy for complex operations -Copy and edit the `.env` file: +```python +from sqlalchemy import create_engine, func +from sqlalchemy.orm import sessionmaker -```bash -cp .env.example .env -``` +# Use connection string from GibsonAI UI +engine = create_engine("your-connection-string-from-ui") +Session = sessionmaker(bind=engine) +session = Session() -Fill in: +# Get popular posts +popular_posts = session.query( + Post.content, + User.username, + func.count(Like.id).label('like_count') +).join(User).outerjoin(Like).group_by( + Post.id +).order_by( + func.count(Like.id).desc() +).limit(10).all() -```bash -GIBSONAI_API_KEY=your_project_api_key -SERPER_API_KEY=your_serper_api_key -OPENAI_API_KEY=your_openai_api_key +# Get user feed with followed users' posts +user_id = 1 +feed = session.query(Post).join( + Follow, Post.user_id == Follow.following_id +).filter( + Follow.follower_id == user_id +).order_by(Post.created_at.desc()).limit(20).all() ``` ---- +### Text-to-SQL for rapid analysis -## Create and Activate Virtual Environment +```python +# Use text-to-SQL for quick insights +query = """ +SELECT u.username, COUNT(p.id) as post_count, COUNT(l.id) as total_likes +FROM users u +LEFT JOIN posts p ON u.id = p.user_id +LEFT JOIN likes l ON p.id = l.post_id +GROUP BY u.id, u.username +ORDER BY total_likes DESC +LIMIT 10 +""" -```bash -source .venv/bin/activate # For Windows: .venv\Scripts\activate +response = requests.post("https://api.gibsonai.com/v1/-/query", json={"query": query}) +top_users = response.json() ``` ---- +## Common use cases -## Install Dependencies +### Hackathon projects ```bash -uv pip sync pyproject.toml +# Create a voting app for hackathons +gibson modify "Create a voting app with users, proposals, and votes. Users can submit proposals and vote on them. Each user can only vote once per proposal" ``` ---- - -## Implement CrewAI Tool for SQL Operations - -CrewAI will communicate with GibsonAI’s **Data API** for CRUD operations. Below is an example **ContactStorageTool**: - -```python -import json -import os -import requests -from dotenv import load_dotenv -from pydantic import Field -from crewai.tools import BaseTool - -load_dotenv() # Load environment variables from .env - -class ContactStorageTool(BaseTool): - name: str = "ContactStorageTool" - description: str = """ - Saves contact information in a GibsonAI database using the hosted API. 
- Expected payload format: - {"company_name": "Company Name", "contacts": [{"name": "Name", "title": "Title", - "linkedin_url": "LinkedIn URL", "phone": "Phone", "email": "Email"}]} - """ - - api_base_url: str = Field(description="The base URL of the GibsonAI API") - api_key: str = Field(description="The API key associated with your GibsonAI project") +### MVP development - def __init__(self): - self.api_base_url = "https://api.gibsonai.com/v1/-" - self.api_key = os.getenv("GIBSONAI_API_KEY") - if not self.api_key: - raise ValueError("Missing GIBSONAI_API_KEY environment variable") - super().__init__() +```bash +# Create an e-commerce MVP +gibson modify "Create an e-commerce MVP with products, users, orders, and reviews. Users can browse products, place orders, and leave reviews" +``` - def _run(self, contact_info: str) -> str: - try: - contact_data = json.loads(contact_info) if isinstance(contact_info, str) else contact_info - company_name = contact_data["company_name"] - contacts = contact_data["contacts"] +### Learning projects - # Insert company - company_payload = {"name": company_name} - response = requests.post( - f"{self.api_base_url}/sales-company", - json=company_payload, - headers={"X-Gibson-API-Key": self.api_key}, - ) - response.raise_for_status() - company_id = response.json()["id"] - print(f"Posted company: {response.status_code}") +```bash +# Create a learning management system +gibson modify "Create a learning platform with students, courses, lessons, and progress tracking. Students can enroll in courses and track their progress" +``` - # Insert contacts - for contact in contacts: - contact_payload = { - "company_id": company_id, - "name": contact["name"], - "title": contact["title"], - "linkedin_url": contact["linkedin_url"], - "phone": contact["phone"], - "email": contact["email"], - } - response = requests.post( - f"{self.api_base_url}/sales-contact", - json=contact_payload, - headers={"X-Gibson-API-Key": self.api_key}, - ) - print(f"Posted contact {contact['name']}: {response.status_code}") - except Exception as e: - return f"Failed to post contact: {str(e)}" +## Performance and scalability -``` +### Instant scaling ---- +- **Autoscaling**: Database automatically scales with your application +- **Zero downtime**: No interruptions as your app grows +- **Global availability**: Hosted on scalable cloud infrastructure +- **Performance monitoring**: Built-in performance tracking -## Run Your Crew +### Development to production -Run: +- **Environment promotion**: Easily move from development to production +- **Zero configuration**: No infrastructure changes needed +- **Continuous deployment**: Deploy changes instantly +- **Backup and recovery**: Automatic backups and point-in-time recovery -```bash -python main.py run -``` +## Use cases -The crew will: + -- Gather data (e.g., sales contacts). -- Use the **GibsonAI Data API** to store the results. +AI-driven schema generation -## Check Your Data +Unified API layer -Go to the [GibsonAI Dashboard](https://app.gibsonai.com/) to see: +Feature development -- **Sales Company** and **Sales Contact** tables. -- The data stored by the CrewAI agent. + - +## What's next? - \ No newline at end of file +