From a913753fce45e336f1b2833d2d12a0d918f684f6 Mon Sep 17 00:00:00 2001 From: perinim Date: Sat, 4 Oct 2025 23:47:22 +0200 Subject: [PATCH 1/7] chore: improved actions --- .github/scripts/README.md | 212 +++++++++++++++++ .github/scripts/langgraph_api.py | 283 +++++++++++++++++++---- .github/scripts/list_deployments.py | 44 ---- .github/scripts/test_langgraph_api.py | 106 --------- .github/workflows/DEPLOYMENT_PIPELINE.md | 97 -------- .github/workflows/README.md | 139 +++++++---- .github/workflows/new-lgp-revision.yml | 11 +- .github/workflows/preview-deployment.yml | 6 +- 8 files changed, 557 insertions(+), 341 deletions(-) create mode 100644 .github/scripts/README.md delete mode 100644 .github/scripts/list_deployments.py delete mode 100644 .github/scripts/test_langgraph_api.py delete mode 100644 .github/workflows/DEPLOYMENT_PIPELINE.md diff --git a/.github/scripts/README.md b/.github/scripts/README.md new file mode 100644 index 0000000..9e88727 --- /dev/null +++ b/.github/scripts/README.md @@ -0,0 +1,212 @@ +# GitHub Actions Scripts + +This directory contains Python scripts used by the GitHub Actions workflows for deployment management, status reporting, and evaluation processing. + +## Scripts Overview + +### 1. 
`langgraph_api.py` +**Purpose:** LangGraph API client for deployment management + +**Features:** +- **List Deployments:** Find existing deployments by name pattern +- **Create Deployment:** Create new preview or production deployments +- **Update Deployment:** Update existing deployments with new Docker images +- **Delete Deployment:** Clean up preview deployments + +**Usage:** +```bash +# Deploy preview for PR +python langgraph_api.py --action deploy-preview --pr-number 123 --image-uri docker.io/user/repo:preview-123 --api-key $LANGSMITH_API_KEY + +# Deploy to production +python langgraph_api.py --action deploy-production --image-uri docker.io/user/repo:latest --api-key $LANGSMITH_API_KEY + +# Cleanup preview deployment +python langgraph_api.py --action cleanup-preview --pr-number 123 --api-key $LANGSMITH_API_KEY + +# Multiple secrets (direct values) +python langgraph_api.py --action deploy-preview --pr-number 123 --image-uri docker.io/user/repo:preview-123 --api-key $LANGSMITH_API_KEY --secrets OPENAI_API_KEY=sk-xxx ANTHROPIC_API_KEY=sk-xxx DATABASE_URL=postgres://... + +# Multiple secrets (from environment) +python langgraph_api.py --action deploy-preview --pr-number 123 --image-uri docker.io/user/repo:preview-123 --api-key $LANGSMITH_API_KEY --secrets-from-env OPENAI_API_KEY ANTHROPIC_API_KEY DATABASE_URL REDIS_URL +``` + +**Key Functions:** +- `deploy_preview()`: Creates or updates preview deployments for PRs +- `deploy_production()`: Creates or updates production deployment +- `cleanup_preview()`: Removes preview deployments when PRs are closed + +### 2. 
`report_deployment.py` +**Purpose:** Generate deployment status reports for PR comments + +**Features:** +- **Status Monitoring:** Check deployment and revision status +- **Markdown Reports:** Generate formatted status reports +- **PR Integration:** Create deployment status comments +- **Status Emojis:** Visual status indicators for different states + +**Usage:** +```bash +# Generate preview deployment report +python report_deployment.py --deployment-name text2sql-agent-pr-123 --image-uri docker.io/user/repo:preview-123 --deployment-type preview + +# Generate production deployment report +python report_deployment.py --deployment-name text2sql-agent-prod --image-uri docker.io/user/repo:latest --deployment-type production +``` + +**Status Indicators:** +- `⏳ AWAITING_DATABASE`: Deployment being set up +- `✅ READY`: Deployment is ready and accessible +- `🔨 BUILDING`: Docker image is being built +- `🚀 DEPLOYING`: Deployment in progress +- `❌ FAILED`: Deployment failed + +### 3. `report_eval.py` +**Purpose:** Process LangSmith evaluation results and generate test reports + +**Features:** +- **Evaluation Processing:** Parse LangSmith experiment results +- **Threshold Checking:** Validate scores against criteria +- **Markdown Reports:** Generate formatted evaluation reports +- **PR Integration:** Create evaluation result comments + +**Usage:** +```bash +# Process all evaluation configs +python report_eval.py + +# Process specific config files +python report_eval.py evaluation_config__*.json + +# Process single config +python report_eval.py evaluation_config__my_experiment.json +``` + +**Configuration Format:** +```json +{ + "experiment_name": "my-experiment", + "criteria": { + "accuracy": ">=0.8", + "response_time": "<2.0", + "quality": ">=0.9" + } +} +``` + +## Integration with Workflows + +### Preview Deployment Flow +1. **`preview-deployment.yml`** calls `langgraph_api.py` to deploy preview +2. **`report_deployment.py`** generates status report +3. 
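The `criteria` strings in the `report_eval.py` configuration above (e.g. `">=0.8"`) pair a comparison operator with a numeric threshold. A minimal sketch of how such strings could be checked; `check_criterion` and the supported operator set are assumptions for illustration, not necessarily the script's actual implementation:

```python
import operator
import re

# Map operator prefixes to comparison functions (assumed operator set).
# Order matters in the regex alternation: ">=" must be tried before ">".
_OPS = {
    ">=": operator.ge,
    "<=": operator.le,
    "==": operator.eq,
    ">": operator.gt,
    "<": operator.lt,
}

def check_criterion(score: float, criterion: str) -> bool:
    """Check a numeric score against a threshold string like '>=0.8'."""
    match = re.match(r"(>=|<=|==|>|<)\s*([0-9.]+)", criterion.strip())
    if not match:
        raise ValueError(f"Unrecognized criterion: {criterion!r}")
    op, threshold = match.groups()
    return _OPS[op](score, float(threshold))

# Validate a set of experiment scores against configured criteria.
criteria = {"accuracy": ">=0.8", "response_time": "<2.0", "quality": ">=0.9"}
scores = {"accuracy": 0.85, "response_time": 1.4, "quality": 0.92}
passed = all(check_criterion(scores[k], c) for k, c in criteria.items())
```

A report generator would then render each per-criterion result into the markdown table posted on the PR.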
**GitHub Action** posts report as PR comment + +### Production Deployment Flow +1. **`new-lgp-revision.yml`** calls `langgraph_api.py` to deploy production +2. **`report_deployment.py`** generates status report +3. **GitHub Action** posts report as PR comment + +### Evaluation Flow +1. **`test-with-results.yml`** runs evaluation tests +2. **`report_eval.py`** processes results from LangSmith +3. **GitHub Action** posts evaluation report as PR comment + +## Environment Variables + +### GitHub Actions Context +In GitHub Actions, secrets are passed directly via command line arguments: +```bash +python langgraph_api.py \ + --secrets OPENAI_API_KEY=${{ secrets.OPENAI_API_KEY }} \ + --api-key ${{ secrets.LANGSMITH_API_KEY }} +``` + +### Local Development Context +For local development, you can use environment variables: +```bash +# Set environment variables +export LANGSMITH_API_KEY=your_key_here +export OPENAI_API_KEY=your_key_here + +# Use with --secrets-from-env +python langgraph_api.py \ + --secrets-from-env OPENAI_API_KEY \ + --api-key $LANGSMITH_API_KEY +``` + +### Required Variables +- `LANGSMITH_API_KEY`: LangSmith API key for authentication +- `LANGSMITH_TRACING`: LangSmith tracing configuration (optional) +- `LANGSMITH_ENDPOINT`: LangSmith endpoint URL (optional) +- `OPENAI_API_KEY`: OpenAI API key (for deployment secrets) + +## Error Handling + +### Common Issues + +1. **API Authentication Failures** + - Verify `LANGSMITH_API_KEY` is correct + - Check API endpoint URL configuration + +2. **Deployment Not Found** + - Ensure deployment name matches exactly + - Check if deployment was created successfully + +3. **Evaluation Config Errors** + - Verify JSON format in config files + - Check experiment names exist in LangSmith + +4. **Docker Image Issues** + - Verify image URI format + - Check Docker registry permissions + +## Development + +### Adding New Scripts + +When creating new scripts: + +1. 
**Follow the established pattern:** + - Use argparse for CLI interface + - Include comprehensive error handling + - Add verbose output options + - Generate markdown reports for PR comments + +2. **Environment Setup:** + - Use environment variables for configuration + - Include proper error messages for missing variables + - Support both direct execution and GitHub Actions + +3. **Documentation:** + - Add docstrings for all functions + - Include usage examples in help text + - Update this README with new script information + +### Testing Scripts + +```bash +# Test API connectivity +python langgraph_api.py --action list-deployments --api-key $LANGSMITH_API_KEY + +# Test deployment reporting +python report_deployment.py --deployment-name test-deployment --image-uri test:latest --deployment-type preview + +# Test evaluation reporting +python report_eval.py --verbose +``` + +## File Dependencies + +- **`langgraph_api.py`**: Standalone, requires `requests` +- **`report_deployment.py`**: Standalone, requires `requests` +- **`report_eval.py`**: Requires `langsmith` package + +## Contributing + +When modifying scripts: + +1. **Maintain backward compatibility** for existing workflow usage +2. **Add comprehensive error handling** for edge cases +3. **Update documentation** to reflect changes +4. **Test with both local execution and GitHub Actions** +5. **Follow the established code style and patterns** diff --git a/.github/scripts/langgraph_api.py b/.github/scripts/langgraph_api.py index adb0446..f71780c 100755 --- a/.github/scripts/langgraph_api.py +++ b/.github/scripts/langgraph_api.py @@ -1,13 +1,14 @@ #!/usr/bin/env python3 """ -LangGraph API helper script for deployment management. -Handles preview deployments, production deployments, and cleanup. +Generic LangGraph API helper script for deployment management. +Handles preview deployments, production deployments, and cleanup with configurable parameters. 
""" import argparse +import json import os import sys -from typing import Any, Dict, Optional +from typing import Any, Dict, List, Optional import requests @@ -15,10 +16,10 @@ class LangGraphAPI: """LangGraph API client for deployment management.""" - def __init__(self, api_key: str): + def __init__(self, api_key: str, base_url: Optional[str] = None): self.api_key = api_key - # Use the same endpoint as the working script - self.base_url = "https://gtm.smith.langchain.dev/api-host/v2" + # Default to the standard LangChain hosted API, but allow override + self.base_url = base_url or "https://api.host.langchain.com/v2" # Fix header name to match working script self.headers = {"X-Api-Key": api_key, "Content-Type": "application/json"} @@ -40,15 +41,23 @@ def list_deployments(self, name_contains: Optional[str] = None) -> Dict[str, Any sys.exit(1) def create_deployment( - self, name: str, image_uri: str, openai_api_key: Optional[str] = None + self, + name: str, + image_uri: str, + secrets: List[Dict[str, str]] = None, + resource_spec: Optional[Dict[str, Any]] = None, ) -> Dict[str, Any]: """Create a new deployment.""" - # Get OpenAI API key from environment if not provided - if openai_api_key is None: - openai_api_key = os.environ.get("OPENAI_API_KEY") - if not openai_api_key: - print("โŒ OPENAI_API_KEY not found in environment variables") - sys.exit(1) + if secrets is None: + secrets = [] + + if resource_spec is None: + resource_spec = { + "min_scale": 1, + "max_scale": 1, + "cpu": 1, + "memory_mb": 1024, + } # Match the working script structure exactly, but for external_docker request_body = { @@ -60,24 +69,14 @@ def create_deployment( "deployment_type": None, "build_on_push": None, "custom_url": None, - "resource_spec": { - "min_scale": 1, - "max_scale": 1, - "cpu": 1, - "memory_mb": 1024, - }, + "resource_spec": resource_spec, }, "source_revision_config": { "repo_ref": None, "langgraph_config_path": None, "image_uri": image_uri, }, - "secrets": [ - { - "name": 
"OPENAI_API_KEY", - "value": openai_api_key, - } - ], + "secrets": secrets, } print(f"๐Ÿ“ค Sending deployment request to: {self.base_url}/deployments") @@ -156,9 +155,56 @@ def find_deployment_by_name(self, name_contains: str) -> Optional[Dict[str, Any] return None -def deploy_preview(api: LangGraphAPI, pr_number: int, image_uri: str): +def parse_secrets( + secrets_args: List[str], secrets_from_env: List[str] +) -> List[Dict[str, str]]: + """Parse secrets from command line arguments and environment variables.""" + secrets = [] + + # Parse secrets from command line (format: KEY=VALUE) + for secret in secrets_args: + if "=" in secret: + key, value = secret.split("=", 1) + secrets.append({"name": key, "value": value}) + else: + print(f"โš ๏ธ Warning: Secret '{secret}' should be in format KEY=VALUE") + + # Parse secrets from environment variables + for env_var in secrets_from_env: + value = os.environ.get(env_var) + if value: + secrets.append({"name": env_var, "value": value}) + else: + print(f"โš ๏ธ Warning: Environment variable '{env_var}' not found") + + return secrets + + +def load_config(config_path: str) -> Dict[str, Any]: + """Load configuration from JSON file.""" + try: + with open(config_path, "r") as f: + config = json.load(f) + return config + except (FileNotFoundError, json.JSONDecodeError) as e: + print(f"โŒ Failed to load config file {config_path}: {e}") + sys.exit(1) + + +def deploy_preview( + api: LangGraphAPI, + pr_number: int, + image_uri: str, + app_name: str, + secrets: List[Dict[str, str]], + resource_spec: Dict[str, Any], + domain: str, + protocol: str, + deployment_name: str = None, +): """Deploy or update a preview deployment.""" - deployment_name = f"text2sql-agent-pr-{pr_number}" + if deployment_name is None: + deployment_name = f"{app_name}-pr-{pr_number}" print(f"๐Ÿ” Looking for existing preview deployment: {deployment_name}") @@ -172,21 +218,26 @@ def deploy_preview(api: LangGraphAPI, pr_number: int, image_uri: str): result = 
api.update_deployment(existing_deployment["id"], image_uri) print("✅ Preview deployment updated successfully!") print(f"📦 Deployment ID: {result['id']}") - print(f"🔗 URL: https://{deployment_name}.langchain.dev") + print(f"🔗 URL: {protocol}://{deployment_name}.{domain}") else: print(f"🆕 Creating new preview deployment: {deployment_name}") print(f"📦 Image: {image_uri}") - result = api.create_deployment(deployment_name, image_uri) + result = api.create_deployment( + deployment_name, image_uri, secrets, resource_spec + ) print("✅ Preview deployment created successfully!") print(f"📦 Deployment ID: {result['id']}") - print(f"🔗 URL: https://{deployment_name}.langchain.dev") + print(f"🔗 URL: {protocol}://{deployment_name}.{domain}") -def cleanup_preview(api: LangGraphAPI, pr_number: int): +def cleanup_preview( + api: LangGraphAPI, pr_number: int, app_name: str, deployment_name: Optional[str] = None +): """Clean up a preview deployment.""" - deployment_name = f"text2sql-agent-pr-{pr_number}" + if deployment_name is None: + deployment_name = f"{app_name}-pr-{pr_number}" print(f"🔍 Looking for preview deployment to cleanup: {deployment_name}") @@ -204,9 +255,20 @@ def cleanup_preview(api: LangGraphAPI, pr_number: int): print(f"ℹ️ No preview deployment found for PR #{pr_number}") -def deploy_production(api: LangGraphAPI, image_uri: str): +def deploy_production( + api: LangGraphAPI, + image_uri: str, + app_name: str, + secrets: List[Dict[str, str]], + resource_spec: Dict[str, Any], + domain: str, + protocol: str, + production_suffix: str, + deployment_name: Optional[str] = None, +): """Deploy or update production deployment.""" - deployment_name = "text2sql-agent-prod" + if deployment_name is None: + deployment_name = f"{app_name}-{production_suffix}" print(f"🔍 Looking for production deployment: {deployment_name}") @@ -219,20 +281,38 @@ def deploy_production(api: LangGraphAPI, image_uri: str): result = api.update_deployment(existing_deployment["id"], 
image_uri) print("✅ Production deployment updated successfully!") print(f"📦 Deployment ID: {result['id']}") - print(f"🔗 URL: https://{deployment_name}.langchain.dev") + print(f"🔗 URL: {protocol}://{deployment_name}.{domain}") else: print(f"🆕 Creating new production deployment: {deployment_name}") print(f"📦 Image: {image_uri}") - result = api.create_deployment(deployment_name, image_uri) + result = api.create_deployment( + deployment_name, image_uri, secrets, resource_spec + ) print("✅ Production deployment created successfully!") print(f"📦 Deployment ID: {result['id']}") - print(f"🔗 URL: https://{deployment_name}.langchain.dev") + print(f"🔗 URL: {protocol}://{deployment_name}.{domain}") def main(): - parser = argparse.ArgumentParser(description="LangGraph API deployment helper") + parser = argparse.ArgumentParser( + description="Generic LangGraph API deployment helper", + formatter_class=argparse.RawDescriptionHelpFormatter, + epilog=""" +Examples: + # Deploy preview with custom app name + python langgraph_api.py --action deploy-preview --app-name my-llm-app --pr-number 123 --image-uri docker.io/user/my-llm-app:preview-123 --api-key $LANGSMITH_API_KEY + + # Deploy with custom secrets and resources + python langgraph_api.py --action deploy-production --app-name my-llm-app --image-uri docker.io/user/my-llm-app:latest --secrets OPENAI_API_KEY=sk-xxx --secrets-from-env DATABASE_URL --min-scale 0 --max-scale 3 --cpu 2.0 --memory-mb 2048 --api-key $LANGSMITH_API_KEY + + # Use configuration file + python langgraph_api.py --config deployment-config.json --action deploy-preview --pr-number 123 --api-key $LANGSMITH_API_KEY + """, + ) + + # Core arguments parser.add_argument( "--action", required=True, @@ -240,34 +320,149 @@ def main(): help="Action to perform", ) parser.add_argument("--api-key", required=True, help="LangGraph API key") + parser.add_argument( + "--base-url", + help="LangGraph API base URL (default: 
https://api.host.langchain.com/v2)", + ) + + # Configuration file + parser.add_argument("--config", help="Configuration file path (JSON format)") + + # Deployment naming + parser.add_argument( + "--app-name", + default="text2sql-agent", + help="Application name (default: text2sql-agent)", + ) + parser.add_argument( + "--production-suffix", + default="prod", + help="Production deployment suffix (default: prod)", + ) + parser.add_argument( + "--deployment-name", + help="Specific deployment name (overrides naming convention)", + ) + + # Preview deployment arguments parser.add_argument("--pr-number", type=int, help="PR number (for preview actions)") parser.add_argument("--image-uri", help="Docker image URI") + + # Secrets management parser.add_argument( - "--openai-api-key", - help="OpenAI API key (optional, will use env var if not provided)", + "--secrets", + nargs="*", + default=[], + help="Secrets to include (format: KEY=VALUE)", + ) + parser.add_argument( + "--secrets-from-env", + nargs="*", + default=[], + help="Environment variables to use as secrets", + ) + + # Resource specifications + parser.add_argument( + "--min-scale", type=int, default=1, help="Minimum scale (default: 1)" + ) + parser.add_argument( + "--max-scale", type=int, default=1, help="Maximum scale (default: 1)" + ) + parser.add_argument( + "--cpu", type=float, default=1.0, help="CPU allocation (default: 1.0)" + ) + parser.add_argument( + "--memory-mb", type=int, default=1024, help="Memory in MB (default: 1024)" + ) + + # URL configuration + parser.add_argument( + "--domain", + default="langchain.dev", + help="Deployment domain (default: langchain.dev)", + ) + parser.add_argument( + "--protocol", default="https", help="URL protocol (default: https)" ) args = parser.parse_args() - api = LangGraphAPI(args.api_key) + # Load configuration file if provided + config = {} + if args.config: + config = load_config(args.config) + # Fill in arguments still at their defaults from the config file (explicit CLI values take precedence) + for key, value in 
config.items(): + if hasattr(args, key) and getattr(args, key) in [ + None, + [], + 0, + 1, + "text2sql-agent", + "prod", + "langchain.dev", + "https", + ]: + setattr(args, key, value) + + # Parse secrets + secrets = parse_secrets(args.secrets, args.secrets_from_env) + + # Build resource specification + resource_spec = { + "min_scale": args.min_scale, + "max_scale": args.max_scale, + "cpu": args.cpu, + "memory_mb": args.memory_mb, + } + + api = LangGraphAPI(args.api_key, args.base_url) if args.action == "deploy-preview": if not args.pr_number or not args.image_uri: print("❌ PR number and image URI are required for preview deployment") sys.exit(1) - deploy_preview(api, args.pr_number, args.image_uri) + + deployment_name = args.deployment_name or f"{args.app_name}-pr-{args.pr_number}" + deploy_preview( + api, + args.pr_number, + args.image_uri, + args.app_name, + secrets, + resource_spec, + args.domain, + args.protocol, + deployment_name, + ) elif args.action == "deploy-production": if not args.image_uri: print("❌ Image URI is required for production deployment") sys.exit(1) + + deployment_name = ( + args.deployment_name or f"{args.app_name}-{args.production_suffix}" + ) - deploy_production(api, args.image_uri) + deploy_production( + api, + args.image_uri, + args.app_name, + secrets, + resource_spec, + args.domain, + args.protocol, + args.production_suffix, + deployment_name, + ) elif args.action == "cleanup-preview": if not args.pr_number: print("❌ PR number is required for preview cleanup") sys.exit(1) - cleanup_preview(api, args.pr_number) + deployment_name = args.deployment_name or f"{args.app_name}-pr-{args.pr_number}" + cleanup_preview(api, args.pr_number, args.app_name, deployment_name) if __name__ == "__main__": diff --git a/.github/scripts/list_deployments.py b/.github/scripts/list_deployments.py deleted file mode 100644 index def0a55..0000000 --- a/.github/scripts/list_deployments.py +++ /dev/null @@ -1,44 +0,0 @@ -#!/usr/bin/env python3 -""" -Simple 
script to list all deployments for testing. -""" - -import os -import sys - -from langgraph_api import LangGraphAPI - - -def main(): - """List all deployments.""" - api_key = os.environ.get("LANGSMITH_API_KEY") - if not api_key: - print("โŒ LANGSMITH_API_KEY not found in environment") - sys.exit(1) - - api = LangGraphAPI(api_key) - - try: - deployments = api.list_deployments() - print(f"๐Ÿ“‹ Found {len(deployments.get('resources', []))} deployments:") - print() - - for deployment in deployments.get("resources", []): - name = deployment.get("name", "Unknown") - status = deployment.get("status", "Unknown") - deployment_id = deployment.get("id", "Unknown") - created_at = deployment.get("created_at", "Unknown") - - print(f"๐Ÿ”— **{name}**") - print(f" Status: {status}") - print(f" ID: {deployment_id}") - print(f" Created: {created_at}") - print() - - except Exception as e: - print(f"โŒ Error listing deployments: {e}") - sys.exit(1) - - -if __name__ == "__main__": - main() diff --git a/.github/scripts/test_langgraph_api.py b/.github/scripts/test_langgraph_api.py deleted file mode 100644 index a34a737..0000000 --- a/.github/scripts/test_langgraph_api.py +++ /dev/null @@ -1,106 +0,0 @@ -#!/usr/bin/env python3 -""" -Test script for LangGraph API to verify connection and payload structure. 
-""" - -import os -import sys - -from langgraph_api import LangGraphAPI - - -def test_api_connection(): - """Test basic API connection and list deployments.""" - api_key = os.environ.get("LANGSMITH_API_KEY") - if not api_key: - print("โŒ LANGSMITH_API_KEY not found in environment") - return False - - print("๐Ÿ” Testing LangGraph API connection...") - - api = LangGraphAPI(api_key) - - try: - # Test listing deployments - deployments = api.list_deployments() - print( - f"โœ… Successfully listed {len(deployments.get('resources', []))} deployments" - ) - return True - except Exception as e: - print(f"โŒ Failed to connect to API: {e}") - return False - - -def test_payload_structure(): - """Test the payload structure without actually creating a deployment.""" - api_key = os.environ.get("LANGSMITH_API_KEY") - openai_key = os.environ.get("OPENAI_API_KEY") - - if not api_key: - print("โŒ LANGSMITH_API_KEY not found in environment") - return False - - if not openai_key: - print("โŒ OPENAI_API_KEY not found in environment") - return False - - print("๐Ÿ” Testing payload structure...") - - LangGraphAPI(api_key) - - # Create a test payload (without sending) - test_name = "test-deployment" - test_image = "docker.io/test/image:latest" - - payload = { - "name": test_name, - "source": "external_docker", - "source_config": { - "integration_id": None, - "repo_url": None, - "deployment_type": None, - "build_on_push": None, - "custom_url": None, - "resource_spec": None, - }, - "source_revision_config": { - "repo_ref": None, - "langgraph_config_path": None, - "image_uri": test_image, - }, - "secrets": [ - { - "name": "OPENAI_API_KEY", - "value": openai_key, - } - ], - } - - print("๐Ÿ“ฆ Test payload structure:") - print(f" - name: {payload['name']}") - print(f" - source: {payload['source']}") - print(f" - source_config: {payload['source_config']}") - print(f" - source_revision_config: {payload['source_revision_config']}") - print(f" - secrets: {len(payload['secrets'])} secret(s)") - 
- return True - - -def main(): - """Run all tests.""" - print("๐Ÿงช Testing LangGraph API...") - - # Test 1: API Connection - if not test_api_connection(): - sys.exit(1) - - # Test 2: Payload Structure - if not test_payload_structure(): - sys.exit(1) - - print("โœ… All tests passed!") - - -if __name__ == "__main__": - main() diff --git a/.github/workflows/DEPLOYMENT_PIPELINE.md b/.github/workflows/DEPLOYMENT_PIPELINE.md deleted file mode 100644 index a162ebc..0000000 --- a/.github/workflows/DEPLOYMENT_PIPELINE.md +++ /dev/null @@ -1,97 +0,0 @@ -# CI/CD Pipeline Documentation - -## Overview - -This repository uses a modern CI/CD pipeline with GitHub Actions for automated testing, preview deployments, and production deployments using LangGraph (LangChain Hosted). - -## Pipeline Structure - -### 1. Testing Pipeline (`test-with-results.yml`) -- **Trigger:** Every push to main/develop and every PR -- **Purpose:** Run comprehensive tests, linting, and quality checks -- **Jobs:** - - Quality checks (linting, formatting) - - Test coverage - - Unit tests - - Integration tests - - E2E tests - - Evaluation tests (PR only) - -### 2. Preview Deployment (`preview-deployment.yml`) -- **Trigger:** PR opened, synchronized, or reopened -- **Purpose:** Create/update preview deployments for PR testing -- **Jobs:** - - Build Docker image with tag `preview-` - - Deploy to LangGraph as preview deployment - - Update existing preview if it exists - -### 3. 
Production Deployment (`new-lgp-revision.yml`) -- **Trigger:** PR closed (merged or not) -- **Purpose:** Cleanup previews and deploy to production -- **Jobs:** - - Cleanup preview deployment - - Build production Docker image (only if merged) - - Deploy to production (only if merged) - -## Deployment Naming Convention - -- **Preview Deployments:** `text2sql-agent-pr-` -- **Production Deployment:** `text2sql-agent-prod` -- **Docker Images:** - - Preview: `perinim98/text2sql-agent:preview-` - - Production: `perinim98/text2sql-agent:latest` - -## URLs - -- **Preview URLs:** `https://text2sql-agent-pr-.langchain.dev` -- **Production URL:** `https://text2sql-agent-prod.langchain.dev` - -## API Integration - -The pipeline uses a custom Python script (`.github/scripts/langgraph_api.py`) to interact with the LangGraph API: - -- **List Deployments:** Find existing preview deployments -- **Create Deployment:** Create new preview/production deployments -- **Update Deployment:** Update existing deployments with new images -- **Delete Deployment:** Clean up preview deployments - -## Required Secrets - -The following GitHub secrets are required: - -- `LANGSMITH_API_KEY`: LangGraph API key -- `OPENAI_API_KEY`: OpenAI API key for the application -- `DOCKER_USERNAME`: Docker Hub username -- `DOCKER_PASSWORD`: Docker Hub password/token - -## Workflow Summary - -| Event | Action | Docker Tag | Deployment | -|-------|--------|------------|------------| -| Push to main | Run tests only | - | - | -| PR open/sync | Build & deploy preview | `preview-` | `text2sql-agent-pr-` | -| PR close | Cleanup preview | - | Delete preview | -| PR merge | Deploy to production | `latest` | `text2sql-agent-prod` | - -## Benefits - -1. **No wasteful deployments:** Only deploys on PR events, not every push -2. **Preview environments:** Each PR gets its own preview deployment -3. **Automatic cleanup:** Preview deployments are automatically removed when PRs are closed -4. 
**Production safety:** Production only deploys when PRs are merged -5. **Cost optimization:** Preview deployments use minimal resources (scale to 0 when not in use) - -## Troubleshooting - -### Common Issues - -1. **Preview deployment not found:** Check if the PR number is correct and the deployment was created successfully -2. **API errors:** Verify the `LANGSMITH_API_KEY` secret is correct -3. **Docker build failures:** Check the Dockerfile path and build context -4. **Production deployment fails:** Ensure the PR was actually merged, not just closed - -### Debugging - -- Check GitHub Actions logs for detailed error messages -- Verify secrets are properly configured -- Test API calls manually using the script: `python .github/scripts/langgraph_api.py --help` diff --git a/.github/workflows/README.md b/.github/workflows/README.md index 3d5ea99..5c43eec 100644 --- a/.github/workflows/README.md +++ b/.github/workflows/README.md @@ -1,46 +1,68 @@ # GitHub Actions Workflows -This directory contains the CI/CD workflows for the text2sql-agent project. The workflows are designed to provide comprehensive quality assurance and automated testing. +This directory contains the CI/CD workflows for the text2sql-agent project. The workflows provide comprehensive testing, quality assurance, and automated deployment using LangGraph (LangChain Hosted). ## Workflow Overview -### 1. Quality Checks (`quality-checks.yml`) +### 1. 
Comprehensive Tests (`test-with-results.yml`) **Triggers:** Push to main/develop, Pull Requests -**Purpose:** Code quality and style enforcement +**Purpose:** Complete testing pipeline with quality checks and evaluation -- **Linting:** Runs `ruff`, `black`, and `isort` checks -- **Type Checking:** Runs `mypy` for static type analysis -- **Pre-commit Hooks:** Ensures code meets pre-commit standards -- **PR Comments:** Automatically comments on PRs with results +**Jobs:** +- **Setup:** Environment setup with Python 3.11 and UV dependency management +- **Quality Checks:** Linting, formatting, and pre-commit hooks +- **Test Coverage:** Test execution with coverage reporting +- **Unit Tests:** Individual component testing +- **Integration Tests:** Tests with external dependencies +- **E2E Tests:** End-to-end workflow testing +- **Evaluation Tests:** LLM-based evaluation (PR only) +- **Evaluation Report:** LangSmith integration with PR comments -### 2. Test Coverage (`test-coverage.yml`) -**Triggers:** Push to main/develop, Pull Requests -**Purpose:** Test execution with coverage reporting +### 2. Preview Deployment (`preview-deployment.yml`) +**Triggers:** PR opened, synchronized, or reopened to main +**Purpose:** Create preview deployments for PR testing -- **Test Execution:** Runs all tests with coverage -- **Coverage Upload:** Uploads coverage reports to Codecov -- **PR Comments:** Reports test results on PRs +**Features:** +- **Docker Build:** Multi-platform Docker image with preview tag +- **LangGraph Deployment:** Deploy to preview environment +- **PR Comments:** Automatic status reporting +- **Preview URLs:** `https://text2sql-agent-pr-.langchain.dev` -### 3. Comprehensive Tests (`test-with-results.yml`) -**Triggers:** Push to main/develop, Pull Requests -**Purpose:** Detailed test execution with result artifacts +### 3. 
Production Deployment (`new-lgp-revision.yml`) +**Triggers:** PR closed (merged or not) +**Purpose:** Cleanup previews and deploy to production -- **Unit Tests:** Tests individual components -- **Integration Tests:** Tests with external dependencies -- **Evaluation Tests:** LLM-based evaluation tests -- **E2E Tests:** End-to-end workflow tests -- **LangSmith Integration:** Automated evaluation reporting +**Jobs:** +- **Cleanup Preview:** Remove preview deployment when PR closes +- **Build Production:** Build and push production Docker image (merged PRs only) +- **Deploy Production:** Deploy to production environment (merged PRs only) +- **Production URL:** `https://text2sql-agent-prod.langchain.dev` + + + +## Deployment Pipeline -### 4. Docker Deployment (`new-lgp-deployment.yml`) -**Triggers:** Push to main, Merged PRs to main -**Purpose:** Docker image building and deployment +### Pipeline Flow -- **Docker Build:** Builds multi-platform Docker image -- **Docker Push:** Pushes to Docker Hub registry -- **LangChain Deployment:** Deploys to LangChain hosted platform -- **Automated Tagging:** Creates semantic versioning tags +| Event | Action | Docker Tag | Deployment | +|-------|--------|------------|------------| +| Push to main/develop | Run comprehensive tests | - | - | +| PR open/sync | Build & deploy preview | `preview-` | `text2sql-agent-pr-` | +| PR close | Cleanup preview | - | Delete preview | +| PR merge | Deploy to production | `latest` | `text2sql-agent-prod` | +### Deployment Naming Convention +- **Preview Deployments:** `text2sql-agent-pr-` +- **Production Deployment:** `text2sql-agent-prod` +- **Docker Images:** + - Preview: `perinim98/text2sql-agent:preview-` + - Production: `perinim98/text2sql-agent:latest` + +### URLs + +- **Preview URLs:** `https://text2sql-agent-pr-.langchain.dev` +- **Production URL:** `https://text2sql-agent-prod.langchain.dev` ## Usage @@ -48,12 +70,11 @@ This directory contains the CI/CD workflows for the text2sql-agent 
project. The Use the Makefile targets that correspond to the CI workflows: ```bash -# Quality checks (equivalent to quality-checks.yml) +# Quality checks (equivalent to test-with-results.yml quality-checks job) make lint -make type-check make pre-commit -# Test coverage (equivalent to test-coverage.yml) +# Test execution (equivalent to test-with-results.yml test-coverage job) make test # Format code (auto-fix) @@ -62,37 +83,61 @@ make format ### CI/CD Pipeline The workflows run automatically on: -- **Every PR:** Quality checks, test coverage, comprehensive tests -- **Every push to main/develop:** All checks and tests -- **Push to main/Merged PRs:** Docker deployment to LangChain +- **Every PR:** Comprehensive tests, preview deployment +- **Every push to main/develop:** Comprehensive tests +- **PR close:** Cleanup preview deployment +- **PR merge:** Production deployment ### Required Secrets The following secrets must be configured in your GitHub repository: -- `OPENAI_API_KEY`: For OpenAI API access in tests -- `LANGSMITH_API_KEY`: For LangSmith integration +- `OPENAI_API_KEY`: For OpenAI API access in tests and application +- `LANGSMITH_API_KEY`: For LangGraph API integration - `LANGSMITH_TRACING`: For LangSmith tracing -- `CODECOV_TOKEN`: For coverage reporting +- `LANGSMITH_ENDPOINT`: For LangSmith endpoint configuration - `DOCKER_USERNAME`: Your Docker Hub username - `DOCKER_PASSWORD`: Your Docker Hub access token +## API Integration + +The pipeline uses a custom Python script (`.github/scripts/langgraph_api.py`) to interact with the LangGraph API: + +- **List Deployments:** Find existing preview deployments +- **Create Deployment:** Create new preview/production deployments +- **Update Deployment:** Update existing deployments with new images +- **Delete Deployment:** Clean up preview deployments + ## Workflow Benefits -1. **Parallel Execution:** Different test types run in parallel for faster feedback -2. 
**Caching:** UV dependencies are cached to speed up builds -3. **Artifact Management:** Test results and reports are preserved as artifacts -4. **PR Integration:** Automatic commenting and status reporting -5. **Quality Gates:** Multiple layers of quality assurance -6. **Automated Deployment:** Docker images built and deployed automatically +1. **No wasteful deployments:** Only deploys on PR events, not every push +2. **Preview environments:** Each PR gets its own preview deployment +3. **Automatic cleanup:** Preview deployments are automatically removed when PRs are closed +4. **Production safety:** Production only deploys when PRs are merged +5. **Cost optimization:** Preview deployments use minimal resources (scale to 0 when not in use) +6. **Parallel Execution:** Different test types run in parallel for faster feedback +7. **Caching:** UV dependencies are cached to speed up builds +8. **Artifact Management:** Test results and reports are preserved as artifacts +9. **PR Integration:** Automatic commenting and status reporting +10. **Quality Gates:** Multiple layers of quality assurance ## Troubleshooting ### Common Issues -1. **Cache Misses:** If builds are slow, check that cache keys are consistent -2. **Secret Errors:** Ensure all required secrets are properly configured -3. **Test Failures:** Check the specific test job logs for detailed error information -4. **Coverage Issues:** Verify Codecov token and repository configuration +1. **Preview deployment not found:** Check if the PR number is correct and the deployment was created successfully +2. **API errors:** Verify the `LANGSMITH_API_KEY` secret is correct +3. **Docker build failures:** Check the Dockerfile path and build context +4. **Production deployment fails:** Ensure the PR was actually merged, not just closed +5. **Cache Misses:** If builds are slow, check that cache keys are consistent +6. **Secret Errors:** Ensure all required secrets are properly configured +7. 
**Test Failures:** Check the specific test job logs for detailed error information + +### Debugging + +- Check GitHub Actions logs for detailed error messages +- Verify secrets are properly configured +- Test API calls manually using the script: `python .github/scripts/langgraph_api.py --help` +- The script supports a `--base-url` parameter for different LangGraph environments ### Manual Workflow Execution You can manually trigger workflows using the GitHub Actions UI or by dispatching workflow events via the GitHub API. diff --git a/.github/workflows/new-lgp-revision.yml b/.github/workflows/new-lgp-revision.yml index 4b950e2..c84c5eb 100644 --- a/.github/workflows/new-lgp-revision.yml +++ b/.github/workflows/new-lgp-revision.yml @@ -62,8 +62,10 @@ jobs: run: | python .github/scripts/langgraph_api.py \ --action cleanup-preview \ + --app-name text2sql-agent \ --pr-number ${{ github.event.pull_request.number }} \ - --api-key ${{ secrets.LANGSMITH_API_KEY }} + --api-key ${{ secrets.LANGSMITH_API_KEY }} \ + --base-url https://gtm.smith.langchain.dev/api-host/v2 # Job 2: Build and push production Docker image (only if PR was merged) build-and-push: @@ -164,9 +166,14 @@ jobs: run: | python .github/scripts/langgraph_api.py \ --action deploy-production \ + --app-name text2sql-agent \ --image-uri ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:latest \ --api-key ${{ secrets.LANGSMITH_API_KEY }} \ - --openai-api-key ${{ secrets.OPENAI_API_KEY }} + --secrets OPENAI_API_KEY=${{ secrets.OPENAI_API_KEY }} \ + --base-url https://gtm.smith.langchain.dev/api-host/v2 \ + --domain langchain.dev \ + --protocol https \ + --production-suffix prod # Generate deployment status report - name: Generate deployment status report diff --git a/.github/workflows/preview-deployment.yml b/.github/workflows/preview-deployment.yml index 509cf27..051ca53 100644 --- a/.github/workflows/preview-deployment.yml +++ b/.github/workflows/preview-deployment.yml @@ -83,10 +83,14 @@ jobs: run: | python 
.github/scripts/langgraph_api.py \
             --action deploy-preview \
+            --app-name text2sql-agent \
             --pr-number ${{ github.event.pull_request.number }} \
             --image-uri ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:preview-${{ github.event.pull_request.number }} \
             --api-key ${{ secrets.LANGSMITH_API_KEY }} \
-            --openai-api-key ${{ secrets.OPENAI_API_KEY }}
+            --secrets OPENAI_API_KEY=${{ secrets.OPENAI_API_KEY }} \
+            --base-url https://gtm.smith.langchain.dev/api-host/v2 \
+            --domain langchain.dev \
+            --protocol https

       # Generate deployment status report
       - name: Generate deployment status report

From cc2562f0c83822bced0d75cd70decad1a0a7bc3d Mon Sep 17 00:00:00 2001
From: Marco Perini
Date: Sat, 4 Oct 2025 23:50:15 +0200
Subject: [PATCH 2/7] Potential fix for code scanning alert no. 11: Clear-text
 logging of sensitive information

Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
---
 .github/scripts/langgraph_api.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/.github/scripts/langgraph_api.py b/.github/scripts/langgraph_api.py
index f71780c..2b50d50 100755
--- a/.github/scripts/langgraph_api.py
+++ b/.github/scripts/langgraph_api.py
@@ -167,7 +167,7 @@ def parse_secrets(
             key, value = secret.split("=", 1)
             secrets.append({"name": key, "value": value})
         else:
-            print(f"⚠️ Warning: Secret '{secret}' should be in format KEY=VALUE")
+            print("⚠️ Warning: A secret argument is not in the format KEY=VALUE and will be ignored.")

     # Parse secrets from environment variables
     for env_var in secrets_from_env:

From 2ea73ed5196e6e1a4bc8a88a3e87c5f12d72aa67 Mon Sep 17 00:00:00 2001
From: perinim
Date: Sun, 5 Oct 2025 09:28:34 +0200
Subject: [PATCH 3/7] fix: resources boundaries

---
 .github/scripts/README.md        |  6 +++
 .github/scripts/langgraph_api.py | 82 ++++++++++++++++----------------
 2 files changed, 48 insertions(+), 40 deletions(-)

diff --git a/.github/scripts/README.md b/.github/scripts/README.md
index 9e88727..2e1a68f 100644
--- a/.github/scripts/README.md
+++ b/.github/scripts/README.md
@@ -29,6 +29,12 @@ python langgraph_api.py --action deploy-preview --pr-number 123 --image-uri dock

 # Multiple secrets (from environment)
 python langgraph_api.py --action deploy-preview --pr-number 123 --image-uri docker.io/user/repo:preview-123 --api-key $LANGSMITH_API_KEY --secrets-from-env OPENAI_API_KEY ANTHROPIC_API_KEY DATABASE_URL REDIS_URL
+
+# Custom deployment name
+python langgraph_api.py --action deploy-preview --pr-number 123 --image-uri docker.io/user/repo:preview-123 --api-key $LANGSMITH_API_KEY --deployment-name my-custom-preview-123
+
+# Custom resources
+python langgraph_api.py --action deploy-production --image-uri docker.io/user/repo:latest --api-key $LANGSMITH_API_KEY --min-scale 1 --max-scale 3 --cpu 2 --memory-mb 2048
 ```

 **Key Functions:**
diff --git a/.github/scripts/langgraph_api.py b/.github/scripts/langgraph_api.py
index 2b50d50..92db9fc 100755
--- a/.github/scripts/langgraph_api.py
+++ b/.github/scripts/langgraph_api.py
@@ -5,7 +5,6 @@
 """

 import argparse
-import json
 import os
 import sys
 from typing import Any, Dict, List, Optional
@@ -80,7 +79,15 @@ def create_deployment(
         }

         print(f"📤 Sending deployment request to: {self.base_url}/deployments")
-        print(f"📦 Payload: {request_body}")
+
+        # Create a safe version of the payload for logging (hide secrets)
+        safe_payload = request_body.copy()
+        if "secrets" in safe_payload:
+            safe_payload["secrets"] = [
+                {"name": secret["name"], "value": "***REDACTED***"}
+                for secret in safe_payload["secrets"]
+            ]
+        print(f"📦 Payload: {safe_payload}")

         response = requests.post(
             f"{self.base_url}/deployments", headers=self.headers, json=request_body
@@ -166,31 +173,24 @@ def parse_secrets(
         if "=" in secret:
             key, value = secret.split("=", 1)
             secrets.append({"name": key, "value": value})
+            print(f"✅ Added secret: {key}")
         else:
-            print("⚠️ Warning: A secret argument is not in the format KEY=VALUE and will be ignored.")
+            print(
+                "⚠️ Warning: A secret argument is not in the format KEY=VALUE and will be ignored."
+            )

     # Parse secrets from environment variables
     for env_var in secrets_from_env:
         value = os.environ.get(env_var)
         if value:
             secrets.append({"name": env_var, "value": value})
+            print(f"✅ Added secret from environment: {env_var}")
         else:
             print(f"⚠️ Warning: Environment variable '{env_var}' not found")

     return secrets


-def load_config(config_path: str) -> Dict[str, Any]:
-    """Load configuration from JSON file."""
-    try:
-        with open(config_path, "r") as f:
-            config = json.load(f)
-        return config
-    except (FileNotFoundError, json.JSONDecodeError) as e:
-        print(f"❌ Failed to load config file {config_path}: {e}")
-        sys.exit(1)
-
-
 def deploy_preview(
     api: LangGraphAPI,
     pr_number: int,
@@ -305,10 +305,10 @@
     python langgraph_api.py --action deploy-preview --app-name my-llm-app --pr-number 123 --image-uri docker.io/user/my-llm-app:preview-123 --api-key $LANGSMITH_API_KEY

     # Deploy with custom secrets and resources
-    python langgraph_api.py --action deploy-production --app-name my-llm-app --image-uri docker.io/user/my-llm-app:latest --secrets OPENAI_API_KEY=sk-xxx --secrets-from-env DATABASE_URL --min-scale 0 --max-scale 3 --cpu 2.0 --memory-mb 2048 --api-key $LANGSMITH_API_KEY
+    python langgraph_api.py --action deploy-production --app-name my-llm-app --image-uri docker.io/user/my-llm-app:latest --secrets OPENAI_API_KEY=sk-xxx --secrets-from-env DATABASE_URL --min-scale 1 --max-scale 3 --cpu 2 --memory-mb 2048 --api-key $LANGSMITH_API_KEY

-    # Use configuration file
-    python langgraph_api.py --config deployment-config.json --action deploy-preview --pr-number 123 --api-key $LANGSMITH_API_KEY
+    # Cleanup preview deployment
+    python langgraph_api.py --action cleanup-preview --app-name my-llm-app --pr-number 123 --api-key $LANGSMITH_API_KEY
 """,
     )

@@ -325,9 +325,6 @@ def main():
         help="LangGraph API base URL (default: https://api.host.langchain.com/v2)",
     )

-    # Configuration file
-    parser.add_argument("--config", help="Configuration file path (JSON format)")
-
     # Deployment naming
     parser.add_argument(
         "--app-name",
@@ -364,16 +361,22 @@ def main():

     # Resource specifications
     parser.add_argument(
-        "--min-scale", type=int, default=1, help="Minimum scale (default: 1)"
+        "--min-scale",
+        type=int,
+        default=1,
+        help="Minimum scale (default: 1, minimum: 1)",
     )
     parser.add_argument(
         "--max-scale", type=int, default=1, help="Maximum scale (default: 1)"
     )
     parser.add_argument(
-        "--cpu", type=float, default=1.0, help="CPU allocation (default: 1.0)"
+        "--cpu", type=int, default=1, help="CPU allocation (default: 1, minimum: 1)"
     )
     parser.add_argument(
-        "--memory-mb", type=int, default=1024, help="Memory in MB (default: 1024)"
+        "--memory-mb",
+        type=int,
+        default=1024,
+        help="Memory in MB (default: 1024, minimum: 1024)",
     )

     # URL configuration
@@ -388,23 +391,22 @@ def main():

     args = parser.parse_args()

-    # Load configuration file if provided
-    config = {}
-    if args.config:
-        config = load_config(args.config)
-    # Override args with config values (config takes precedence)
-    for key, value in config.items():
-        if hasattr(args, key) and getattr(args, key) in [
-            None,
-            [],
-            0,
-            1,
-            "text2sql-agent",
-            "prod",
-            "langchain.dev",
-            "https",
-        ]:
-            setattr(args, key, value)
+    # Validate resource specifications
+    if args.min_scale < 1:
+        print("❌ Error: min-scale must be at least 1")
+        sys.exit(1)
+
+    if args.cpu < 1:
+        print("❌ Error: CPU must be at least 1")
+        sys.exit(1)
+
+    if args.memory_mb < 1024:
+        print("❌ Error: Memory must be at least 1024 MB")
+        sys.exit(1)
+
+    if args.max_scale < args.min_scale:
+        print("❌ Error: max-scale must be greater than or equal to min-scale")
+        sys.exit(1)

     # Parse secrets
     secrets = parse_secrets(args.secrets, args.secrets_from_env)

From 38def9215085eeefe6a1a5439857542100ce2dcd Mon Sep 17 00:00:00 2001
From: perinim
Date: Sun, 5 Oct 2025 09:37:02 +0200
Subject: [PATCH 4/7] chore: removed sensitive logging

---
 .github/scripts/langgraph_api.py | 20 ++++++++------------
 1 file changed, 8 insertions(+), 12 deletions(-)

diff --git a/.github/scripts/langgraph_api.py b/.github/scripts/langgraph_api.py
index 92db9fc..1868e74 100755
--- a/.github/scripts/langgraph_api.py
+++ b/.github/scripts/langgraph_api.py
@@ -79,15 +79,9 @@ def create_deployment(
         }

         print(f"📤 Sending deployment request to: {self.base_url}/deployments")
-
-        # Create a safe version of the payload for logging (hide secrets)
-        safe_payload = request_body.copy()
-        if "secrets" in safe_payload:
-            safe_payload["secrets"] = [
-                {"name": secret["name"], "value": "***REDACTED***"}
-                for secret in safe_payload["secrets"]
-            ]
-        print(f"📦 Payload: {safe_payload}")
+        print(
+            f"📦 Deployment: {request_body.get('name')} with image: {request_body.get('source_revision_config', {}).get('image_uri')}"
+        )

         response = requests.post(
             f"{self.base_url}/deployments", headers=self.headers, json=request_body
@@ -173,7 +167,7 @@ def parse_secrets(
         if "=" in secret:
             key, value = secret.split("=", 1)
             secrets.append({"name": key, "value": value})
-            print(f"✅ Added secret: {key}")
+            print("✅ Secret added")
         else:
             print(
                 "⚠️ Warning: A secret argument is not in the format KEY=VALUE and will be ignored."
@@ -184,9 +178,11 @@ def parse_secrets(
         value = os.environ.get(env_var)
         if value:
             secrets.append({"name": env_var, "value": value})
-            print(f"✅ Added secret from environment: {env_var}")
+            print("✅ Secret added from environment")
         else:
-            print(f"⚠️ Warning: Environment variable '{env_var}' not found")
+            print(
+                "⚠️ Warning: One of the required environment variables for a secret was not found"
+            )

     return secrets

From 2ae864b3ca2340aa80e68eeac3f5f02dc527d68a Mon Sep 17 00:00:00 2001
From: Marco Perini
Date: Sun, 5 Oct 2025 09:38:43 +0200
Subject: [PATCH 5/7] Potential fix for code scanning alert no. 17: Clear-text
 logging of sensitive information

Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
---
 .github/scripts/langgraph_api.py | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/.github/scripts/langgraph_api.py b/.github/scripts/langgraph_api.py
index 1868e74..eaff154 100755
--- a/.github/scripts/langgraph_api.py
+++ b/.github/scripts/langgraph_api.py
@@ -79,9 +79,9 @@ def create_deployment(
         }

         print(f"📤 Sending deployment request to: {self.base_url}/deployments")
-        print(
-            f"📦 Deployment: {request_body.get('name')} with image: {request_body.get('source_revision_config', {}).get('image_uri')}"
-        )
+        # Log only non-sensitive data (do not log deployment name/image_uri, which could contain secrets)
+        print("📦 Deployment request sent.")  # Redacted name and image_uri from logs
+
         response = requests.post(
             f"{self.base_url}/deployments", headers=self.headers, json=request_body

From aea96fd0b2cb39104b03817a2fb6f3dbcd797cd0 Mon Sep 17 00:00:00 2001
From: perinim
Date: Sun, 5 Oct 2025 09:42:49 +0200
Subject: [PATCH 6/7] chore: fix white trailing space

---
 .github/scripts/langgraph_api.py | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/.github/scripts/langgraph_api.py b/.github/scripts/langgraph_api.py
index eaff154..6086d53 100755
--- a/.github/scripts/langgraph_api.py
+++ b/.github/scripts/langgraph_api.py
@@ -82,7 +82,6 @@
         # Log only non-sensitive data (do not log deployment name/image_uri, which could contain secrets)
         print("📦 Deployment request sent.")  # Redacted name and image_uri from logs
-
         response = requests.post(
             f"{self.base_url}/deployments", headers=self.headers, json=request_body
         )
@@ -457,7 +456,7 @@

     elif args.action == "cleanup-preview":
         if not args.pr_number:
-            print("❌ PR number is required for preview cleanup")
+            print("❌ PR number is required for preview cleanup.")
             sys.exit(1)

         deployment_name =
args.deployment_name or f"{args.app_name}-pr-{args.pr_number}" cleanup_preview(api, args.pr_number, args.app_name, deployment_name) From 891bae9b90bc951f0b0875ffcfabc1f830f0d994 Mon Sep 17 00:00:00 2001 From: perinim Date: Fri, 10 Oct 2025 16:37:55 +0200 Subject: [PATCH 7/7] docs: readme --- README.md | 144 +++++++++++++++++++++++++++++------------------------- 1 file changed, 77 insertions(+), 67 deletions(-) diff --git a/README.md b/README.md index be8a6ee..c55d158 100644 --- a/README.md +++ b/README.md @@ -123,20 +123,18 @@ Example `langgraph.json`: } ``` -### Method 1: GitHub Integration from UI (Recommended for Cloud Users) +### Method 1: LangSmith Deployment UI (Cloud Only) -Connect your GitHub repository directly to LangGraph Platform: +Deploy your agent using the LangSmith deployment interface for cloud deployments: -1. Go to your LangGraph Platform dashboard -2. Connect your GitHub repository by providing GitHub permissions -3. The platform will automatically build and deploy your agent from your repository -4. No manual Docker image building or pushing required +1. Go to your LangSmith dashboard +2. Navigate to the Deployments section +3. Connect your GitHub repository and specify the agent path **Benefits:** -- Simplest deployment method for cloud users -- Automatic build and deployment -- No manual Docker image management +- Simple UI-based deployment - Direct integration with your GitHub repository +- No manual Docker image management required ### Method 2: Build Docker Image with LangGraph CLI @@ -152,22 +150,12 @@ docker push my-agent:latest You can push to any container registry (Docker Hub, AWS ECR, Azure ACR, Google GCR, etc.) that your deployment environment has access to. -See the [LangGraph CLI build documentation](https://docs.langchain.com/langgraph-platform/cli#build) for more details. 
- -### Method 3: Generate Dockerfile - -Create a custom Dockerfile for more control: - -```bash -# Generate Dockerfile from langgraph.json -uv run langgraph dockerfile -c langgraph.json Dockerfile +**Deployment Options:** +- **Cloud LangSmith**: Use the Control Plane API to create deployments from your container registry +- **Self-Hosted/Hybrid LangSmith**: Choose between LangSmith UI or Control Plane API -# Build and push manually -docker build -t my-agent:latest . -docker push my-agent:latest -``` +See the [LangGraph CLI build documentation](https://docs.langchain.com/langgraph-platform/cli#build) for more details. -See the [LangGraph CLI dockerfile documentation](https://docs.langchain.com/langgraph-platform/cli#dockerfile) for more details. ### Local Development & Testing @@ -189,21 +177,26 @@ This will: See the [LangGraph CLI documentation](https://docs.langchain.com/langgraph-platform/cli#dev) for more details. -### Deploy to LangGraph Platform +### Deploy to LangSmith -#### Cloud Deployment (LangSmith Cloud) +#### Cloud Deployment -Deploy using the [LangGraph Platform Control Plane API](https://docs.langchain.com/langgraph-platform/api-ref-control-plane#langgraph-control-plane-api-reference) to create deployments from your container registry. +Deploy using the LangSmith deployment UI or the [Control Plane API](https://docs.langchain.com/langgraph-platform/api-ref-control-plane#langgraph-control-plane-api-reference): + +- **UI Method**: Connect your GitHub repository directly in the LangSmith UI +- **API Method**: Use the Control Plane API to create deployments from your container registry (required for Docker images) ![Cloud Deployment UI](assets/cloud-lgp.png) -#### Self-Hosted Deployment +#### Self-Hosted/Hybrid Deployment For [self-hosted LangSmith instances](https://docs.langchain.com/langgraph-platform/deploy-self-hosted-full-platform): 1. Ensure your Kubernetes cluster has access to your container registry -2. 
Create a new deployment from the LangSmith UI -3. Specify your image URI (e.g., `docker.io/username/my-agent:latest`) +2. Build and push your Docker image to your container registry +3. Choose your deployment method: + - **LangSmith UI**: Create a new deployment and specify your image URI (e.g., `docker.io/username/my-agent:latest`) + - **Control Plane API**: Use the API to create deployments from your container registry **Note**: Self-hosted deployments don't distinguish between development/production types, but you can use tags to organize them. @@ -211,6 +204,15 @@ For [self-hosted LangSmith instances](https://docs.langchain.com/langgraph-platf See the [self-hosted full platform deployment guide](https://docs.langchain.com/langgraph-platform/deploy-self-hosted-full-platform) for detailed setup instructions. +### Connect to Your Deployed Agent + +Once your agent is deployed, you can connect to it using several methods: + +- **[LangGraph SDK](https://docs.langchain.com/langgraph-platform/sdk)**: Use the LangGraph SDK for programmatic integration +- **[RemoteGraph](https://docs.langchain.com/langgraph-platform/use-remote-graph)**: Connect using RemoteGraph for remote graph connections (to use your graph in other graphs) +- **[REST API](https://docs.langchain.com/langgraph-platform/server-api-ref)**: Use HTTP-based interactions with your deployed agent +- **[LangGraph Studio](https://docs.langchain.com/langgraph-platform/langgraph-studio)**: Access the visual interface for testing and debugging + ### Environment Configuration #### Database & Cache Configuration @@ -233,39 +235,45 @@ Remember to add all necessary environment variables to your deployment, includin ```mermaid graph TD - A[Agent Implementation] --> B[langgraph.json] + A[Agent Implementation] --> B[langgraph.json + dependencies] B --> C[Test Locally with langgraph dev] - C --> D{Local Test Passed?} - D -->|No| E[Fix Issues] + C --> D{Errors?} + D -->|Yes| E[Fix Issues] E --> C - D -->|Yes| F[Choose 
Deployment Method] - - F --> G[Method 1: GitHub Integration from UI] - F --> H[Method 2: langgraph build] - F --> I[Method 3: langgraph dockerfile] - - G --> J[Connect GitHub Repo] - J --> K[Auto Build & Deploy] - - H --> L[Build Docker Image] - I --> M[Generate Dockerfile] - M --> N[Build Docker Image Manually] - - L --> O[Push to Container Registry] - N --> O - - K --> P[Deploy to LangGraph Platform] - O --> P - P --> Q{Deployment Type?} + D -->|No| F[Choose LangSmith Instance] + + F --> G[Cloud LangSmith] + F --> H[Self-Hosted/Hybrid LangSmith] + + subgraph "Cloud LangSmith" + G --> I[Method 1: Connect GitHub Repo in UI] + G --> J[Method 2: Build Docker Image] + I --> K[Deploy via LangSmith UI] + J --> L[Build Docker Image] + L --> M[Push to Container Registry] + M --> N[Deploy via Control Plane API] + end - Q -->|Cloud| R[Use Control Plane API or GitHub] - Q -->|Self-Hosted| S[Use LangSmith UI] + subgraph "Self-Hosted/Hybrid LangSmith" + H --> S[Build Docker Image] + S --> T[Push to Container Registry] + T --> U{Deploy via?} + U -->|UI| V[Specify Image URI in UI] + U -->|API| W[Use Control Plane API] + V --> X[Deploy via LangSmith UI] + W --> Y[Deploy via Control Plane API] + end - R --> T[Production Deployment] - S --> T + K --> AA[Agent Ready for Use] + N --> AA + X --> AA + Y --> AA - T --> U[Monitor with LangSmith] - U --> V[Agent Ready for Use] + AA --> BB{Connect via?} + BB -->|LangGraph SDK| CC[Use LangGraph SDK] + BB -->|RemoteGraph| DD[Use RemoteGraph] + BB -->|REST API| EE[Use REST API] + BB -->|LangGraph Studio UI| FF[Use LangGraph Studio UI] ``` ### Deployment Best Practices @@ -304,18 +312,20 @@ graph TD A2[Prompt Commit in PromptHub] --> B1 A3[Online Evaluation Alert] --> B1 - B1 --> C1[Run Unit Tests on Nodes] - B1 --> C2[Run Integration Tests] - B1 --> C3[Run End to End Tests on Graph] + subgraph "Testing" + B1 --> C1[Run Unit Tests on Nodes] + B1 --> C2[Run Integration Tests] + B1 --> C3[Run End to End Tests on Graph] - C1 --> D1[Run Offline 
Evaluations] - C2 --> D1 - C3 --> D1 + C1 --> D1[Run Offline Evaluations] + C2 --> D1 + C3 --> D1 - D1 --> E1[Evaluate with OpenEvals or AgentEvals] - D1 --> E2[Assertions: Hard and Soft] + D1 --> E1[Evaluate with OpenEvals or AgentEvals] + D1 --> E2[Assertions: Hard and Soft] + end - E1 --> F1[Push to Staging Deployment - Spin new Docker deployment in LGP as Development Type] + E1 --> F1[Push to Staging Deployment - Deploy to LangSmith as Development Type] E2 --> F1 F1 --> G1[Run Online Evaluations on Live Data] @@ -326,7 +336,7 @@ graph TD I1 --> J2[Trigger Alert via Webhook] I1 --> J3[Push Trace to Golden Dataset] - F1 --> K1[Promote to Production if All Pass - Spin Production Deployment in LGP] + F1 --> K1[Promote to Production if All Pass - Deploy to LangSmith Production] J2 --> L1[Slack or PagerDuty Notification]
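
Taken together, the patches above pin down three conventions: a deployment naming scheme (`<app>-pr-<pr-number>` for previews, `<app>-prod` for production, as built by `f"{args.app_name}-pr-{args.pr_number}"` in the script), a URL scheme assembled from `--protocol` and `--domain`, and the resource lower bounds validated in `main()`. A minimal sketch of those rules as pure helpers (the helper names are invented for illustration; only the string patterns and numeric bounds come from the patches):

```python
from typing import Optional

# Sketch of the naming/URL conventions and resource bounds described in
# the patches. Helper names are illustrative, not part of the script.


def deployment_name(app_name: str, pr_number: Optional[int] = None,
                    production_suffix: str = "prod") -> str:
    """Previews get '<app>-pr-<pr-number>'; production gets '<app>-<suffix>'."""
    if pr_number is not None:
        return f"{app_name}-pr-{pr_number}"
    return f"{app_name}-{production_suffix}"


def deployment_url(name: str, domain: str = "langchain.dev",
                   protocol: str = "https") -> str:
    """Deployments are served at '<protocol>://<name>.<domain>'."""
    return f"{protocol}://{name}.{domain}"


def validate_resources(min_scale: int, max_scale: int,
                       cpu: int, memory_mb: int) -> None:
    """Mirror the bounds added in patch 3: scale and CPU >= 1, memory >= 1024 MB."""
    if min_scale < 1:
        raise ValueError("min-scale must be at least 1")
    if cpu < 1:
        raise ValueError("CPU must be at least 1")
    if memory_mb < 1024:
        raise ValueError("Memory must be at least 1024 MB")
    if max_scale < min_scale:
        raise ValueError("max-scale must be greater than or equal to min-scale")
```

For PR 123 this yields `text2sql-agent-pr-123` and `https://text2sql-agent-pr-123.langchain.dev`, the same deployment name the cleanup job later looks up when the PR closes.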