Invariant Gateway

LLM proxy to observe and debug what your AI agents are doing.

Documentation | Quickstart for Users | Quickstart for Developers | Run Locally

Invariant Gateway is a lightweight, zero-configuration service that acts as an intermediary between AI agents and LLM providers (such as OpenAI and Anthropic).

Gateway automatically traces agent interactions and stores them in Invariant Explorer, giving you insight into what your agents are doing so you can observe and debug them.



Invariant Gateway Diagram


  • Single Line Setup: Just change the base URL of your LLM provider to the Invariant Gateway.
  • LLM-Level Interception: Observe agents at the LLM level for better debugging and analysis.
  • Tool Calling and Computer Use Support: Capture all forms of agentic interactions.
  • Seamless Forwarding and Streaming: Proxy and stream LLM traffic to OpenAI, Anthropic, and other providers.
  • Trace Storage: Store and organize runtime traces in the Invariant Explorer.

Quickstart for Teams and Users

Looking to observe and secure AI agents in your organization? See our no-code quickstart guide for users to get started.

Quickstart for Developers

To add Gateway to your agentic system, follow one of the integration guides below, depending on the LLM provider.

Integration Guides

🔹 OpenAI Integration

Gateway supports the OpenAI Chat Completions API (/v1/chat/completions endpoint).

  1. Follow these steps to obtain an OpenAI API key.

  2. Modify OpenAI Client Setup

    Instead of connecting directly to OpenAI, configure your OpenAI client to use Gateway:

    from httpx import Client
    from openai import OpenAI

    # The OpenAI API key is still read from the OPENAI_API_KEY environment
    # variable as usual; only the base URL and the extra header change.
    client = OpenAI(
        http_client=Client(
            headers={
                "Invariant-Authorization": "Bearer your-invariant-api-key"
            },
        ),
        base_url="https://explorer.invariantlabs.ai/api/v1/gateway/{add-your-dataset-name-here}/openai",
    )

    Note: Do not include the curly braces {}. If the dataset does not exist in Invariant Explorer, it will be created before adding traces.
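
    Once configured, the client is used exactly as usual and all traffic flows through Gateway. A minimal usage sketch (the model name and prompt are just examples):

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "What is the capital of France?"}],
    )
    print(response.choices[0].message.content)
    # Output: "Paris."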

🔹 Anthropic Integration

Gateway supports the Anthropic Messages API (/v1/messages endpoint).

  1. Follow these steps to obtain an Anthropic API key.

  2. Modify Anthropic Client Setup

    Instead of connecting directly to Anthropic, configure your Anthropic client to use Gateway:

    from httpx import Client
    from anthropic import Anthropic

    # The Anthropic API key is still read from the ANTHROPIC_API_KEY environment
    # variable as usual; only the base URL and the extra header change.
    client = Anthropic(
        http_client=Client(
            headers={
                "Invariant-Authorization": "Bearer your-invariant-api-key"
            },
        ),
        base_url="https://explorer.invariantlabs.ai/api/v1/gateway/{add-your-dataset-name-here}/anthropic",
    )

    Note: Do not include the curly braces {}. If the dataset does not exist in Invariant Explorer, it will be created before adding traces.
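
    As with OpenAI, the configured client is then used as usual; a minimal usage sketch (the model name is just an example):

    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=1024,
        messages=[{"role": "user", "content": "What is the capital of France?"}],
    )
    print(response.content[0].text)
    # Output: "Paris."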

🔹 Gemini Integration

Gateway supports the Gemini generateContent and streamGenerateContent methods.

  1. Follow these steps to obtain a Gemini API key.

  2. Modify Gemini Client Setup

    Instead of connecting directly to Gemini, configure your Gemini client to use Gateway:

    import os

    from google import genai

    client = genai.Client(
        api_key=os.environ["GEMINI_API_KEY"],
        http_options={
            "base_url": "https://explorer.invariantlabs.ai/api/v1/gateway/{add-your-dataset-name-here}/gemini",
            "headers": {
                "Invariant-Authorization": "Bearer your-invariant-api-key"
            },
        },
    )

    Note: Do not include the curly braces {}. If the dataset does not exist in Invariant Explorer, it will be created before adding traces.
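
    The configured client is then used as usual; a minimal usage sketch (the model name is just an example):

    response = client.models.generate_content(
        model="gemini-2.0-flash",
        contents="What is the capital of France?",
    )
    print(response.text)
    # Output: "Paris."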

🔹 OpenAI Swarm Integration

Integrating directly with a specific agent framework is also supported, simply by configuring the underlying LLM client.

For instance, OpenAI Swarm relies on OpenAI's Python client, so the setup is very similar to the standard OpenAI integration:

from swarm import Swarm, Agent
from openai import OpenAI
from httpx import Client
import os

client = Swarm(
    client=OpenAI(
        http_client=Client(headers={"Invariant-Authorization": "Bearer " + os.getenv("INVARIANT_API_KEY", "")}),
        base_url="https://explorer.invariantlabs.ai/api/v1/gateway/weather-swarm-agent/openai",
    )
)


def get_weather():
    return "It's sunny."


agent = Agent(
    name="Agent A",
    instructions="You are a helpful agent.",
    functions=[get_weather],
)

response = client.run(
    agent=agent,
    messages=[{"role": "user", "content": "What's the weather?"}],
)

print(response.messages[-1]["content"])
# Output: "It seems to be sunny."

🔹 LiteLLM Integration

LiteLLM is a Python library that acts as a unified interface for calling multiple LLM providers. If you are using it, connecting to the Gateway proxy is very convenient: you just need to pass the correct base_url.

from litellm import completion
import random

base_url = "https://explorer.invariantlabs.ai/api/v1/gateway/litellm/{add-your-dataset-name-here}"
EXAMPLE_MODELS = ["openai/gpt-4o", "gemini/gemini-2.0-flash", "anthropic/claude-3-5-haiku-20241022"]
model = random.choice(EXAMPLE_MODELS)

base_url += "/" + model.split("/")[0]  # append /openai, /gemini, or /anthropic

if model.split("/")[0] == "gemini":
    base_url += f"/v1beta/models/{model.split('/')[1]}"  # Gemini expects the model name in the URL


chat_response = completion(
    model=model,
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    extra_headers={"Invariant-Authorization": "Bearer your-invariant-api-key"},
    base_url=base_url,
)

print(chat_response.choices[0].message.content)
# Output: "Paris."
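
Gateway also forwards streamed responses. A minimal streaming sketch, reusing the base_url built above; LiteLLM's stream chunks mirror the OpenAI streaming format, so the incremental text lives in choices[0].delta.content:

stream = completion(
    model=model,
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    extra_headers={"Invariant-Authorization": "Bearer your-invariant-api-key"},
    stream=True,
    base_url=base_url,
)
for chunk in stream:
    # Each chunk carries an incremental delta; content can be None on the final chunk.
    print(chunk.choices[0].delta.content or "", end="")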

🔹 Microsoft Autogen Integration

You can also easily integrate the Gateway with Microsoft Autogen as follows:

import asyncio
from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient

import os
from httpx import AsyncClient

async def main() -> None:
    client = OpenAIChatCompletionClient(
        model="gpt-4o",
        http_client=AsyncClient(headers={"Invariant-Authorization": "Bearer " + os.getenv("INVARIANT_API_KEY", "")}),
        base_url="https://explorer.invariantlabs.ai/api/v1/gateway/weather-swarm-agent/openai",
    )
    agent = AssistantAgent("assistant", client)
    print(await agent.run(task="Say 'Hello World!'"))


asyncio.run(main())
# Output: "Hello World!"

This will automatically trace your agent interactions in Invariant Explorer.


Quickstart for Users

If you are not building an agent yourself but would like to observe and secure AI agents in your organization, you can do so by configuring the agents to use the Gateway.

See below for example integrations with popular agents.

OpenHands Integration

OpenHands (formerly OpenDevin) is a platform for software development agents powered by AI.

How to Integrate OpenHands with Invariant Gateway

Step 1: Modify the API Base

Enable the Advanced Options toggle under Settings and update the Base URL to the following:

https://explorer.invariantlabs.ai/api/v1/gateway/{add-your-dataset-name-here}/openai

Step 2: Adjust the API Key Format

Set the API Key using the following format:

{your-llm-api-key};invariant-auth={your-invariant-api-key}

Note: Do not include the curly braces {}.

The Invariant Gateway extracts the invariant-auth field from the API key and forwards it to Invariant Explorer, while sending the actual API key to OpenAI or Anthropic.
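
Conceptually, the combined key is split on the invariant-auth marker. The following is only an illustrative sketch of that parsing, not Gateway's actual implementation:

def split_combined_key(combined: str) -> tuple[str, str]:
    """Split "sk-...;invariant-auth=inv-..." into (llm_key, invariant_key)."""
    llm_key, _, invariant_key = combined.partition(";invariant-auth=")
    return llm_key, invariant_key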


SWE-agent Integration

SWE-agent allows your preferred language model (e.g., GPT-4o or Claude 3.5 Sonnet) to autonomously utilize tools for various tasks, such as fixing issues in real GitHub repositories.

Using SWE-agent with Invariant Gateway

SWE-agent does not support custom headers, so you cannot pass the Invariant API key via an Invariant-Authorization header. However, there is a workaround when using the Invariant Gateway.

Step 1: Modify the API Base

Run sweagent with the following flag:

--agent.model.api_base=https://explorer.invariantlabs.ai/api/v1/gateway/{add-your-dataset-name-here}/openai

Note: Do not include the curly braces {}.

Step 2: Adjust the API Key Format

Instead of setting your API key normally, modify the environment variables as follows, quoting the values so the shell does not interpret the semicolon as a command separator:

export OPENAI_API_KEY="{your-openai-api-key};invariant-auth={your-invariant-api-key}"
export ANTHROPIC_API_KEY="{your-anthropic-api-key};invariant-auth={your-invariant-api-key}"

Note: Do not include the curly braces {}.

This setup ensures that SWE-agent works seamlessly with Invariant Gateway while retaining its full functionality. 🚀


Run the Gateway Locally

You can also operate your own instance of the Gateway to keep full control over privacy and security.

To run Gateway locally, you have two options:

1. Run gateway from the repository

  1. Clone this repository.

  2. Then, to start Invariant Gateway, run the following commands (Docker must be installed):

cd invariant-gateway
bash run.sh build && bash run.sh up

This will launch Gateway at http://localhost:8005/api/v1/gateway/.

2. Run the Gateway using the published Docker image

You can also run the Gateway using the published Docker image. This is a good option if you want to run the Gateway in a cloud environment.

# pull the latest image
docker pull --platform linux/amd64 ghcr.io/invariantlabs-ai/invariant-gateway/gateway:latest
# run Gateway on localhost:8005
docker run -p 8005:8005 -e PORT=8005 --platform linux/amd64 ghcr.io/invariantlabs-ai/invariant-gateway/gateway:latest

This will launch Gateway at http://localhost:8005/api/v1/gateway/. This instance will automatically push your traces to https://explorer.invariantlabs.ai.
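
To route traffic through the local instance, point your client's base URL at it exactly as with the hosted Gateway; a minimal sketch for the OpenAI client (the dataset name is a placeholder):

from httpx import Client
from openai import OpenAI

client = OpenAI(
    http_client=Client(
        headers={"Invariant-Authorization": "Bearer your-invariant-api-key"}
    ),
    base_url="http://localhost:8005/api/v1/gateway/my-dataset/openai",
)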

Set Up an Invariant API Key

Follow the instructions here to obtain an API key. This allows the gateway to push traces to Invariant Explorer.

Development

Pushing to Local Explorer

By default, Gateway points to the public Explorer instance at explorer.invariantlabs.ai. To point it to your local Explorer instance instead, modify the INVARIANT_API_URL value inside .env, which contains instructions on pointing to a local instance.
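
For example, the relevant line in .env might look like the following; the address below is an assumption, so use whatever host and port your local Explorer actually runs on:

# Hypothetical local Explorer address - adjust to your setup.
INVARIANT_API_URL=http://localhost:8000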

Run Unit Tests

To run the unit tests, execute:

bash run.sh unit-tests

Run Integration Tests

To run the integration tests, execute:

bash run.sh integration-tests

To run a subset of the integration tests, execute:

bash run.sh integration-tests open_ai/test_chat_with_tool_call.py
