
Conversation

@rogeriochaves rogeriochaves commented Sep 2, 2025

Summary by CodeRabbit

  • Refactor
    • Updated environment variable handling to use X_LITELLM_* and X_LITELLM_EMBEDDINGS_* prefixes for model and embedding configuration; previous LITELLM_* prefixes are no longer auto-exported. Uppercase-only keys are ignored. Azure alias handling unchanged.
  • Tests
    • Updated test configurations to the new X_LITELLM_* environment variable keys.
  • Chores
    • Bumped patch versions across core and evaluator packages (azure, example, huggingface, langevals, legacy, lingua, openai, presidio, ragas).


coderabbitai bot commented Sep 2, 2025

Walkthrough

Patch-level version bumps across multiple packages. Tests update environment variable keys from LITELLM_* to X_LITELLM_*. Core logic changes map and export env vars using X_LITELLM_* (and X_LITELLM_EMBEDDINGS_* for embeddings) instead of LITELLM_*. No public API signatures changed.

Changes

| Cohort / File(s) | Summary |
|---|---|
| **Metadata version bumps**<br>`evaluators/azure/pyproject.toml`, `evaluators/example/pyproject.toml`, `evaluators/huggingface/pyproject.toml`, `evaluators/langevals/pyproject.toml`, `evaluators/legacy/pyproject.toml`, `evaluators/lingua/pyproject.toml`, `evaluators/openai/pyproject.toml`, `evaluators/presidio/pyproject.toml`, `evaluators/ragas/pyproject.toml`, `langevals_core/pyproject.toml` | Patch versions incremented; no other metadata or dependency changes. |
| **Tests: env key rename**<br>`evaluators/langevals/tests/test_llm_boolean.py`, `evaluators/langevals/tests/test_llm_category.py`, `evaluators/langevals/tests/test_llm_score.py` | Renamed env keys from `LITELLM_api_key`/`LITELLM_api_base` to `X_LITELLM_api_key`/`X_LITELLM_api_base`. |
| **Core: env mapping change**<br>`langevals_core/langevals_core/base_evaluator.py`, `langevals_core/langevals_core/litellm_patch.py` | Auto-export/mapping now uses `X_LITELLM_*` prefixes; embeddings use `X_LITELLM_EMBEDDINGS_*`. Added guard to skip all-uppercase replaced keys; removed prior exclusions tied to old prefixes. |
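The completion-side mapping described above can be sketched in isolation roughly like this (function name and sample values are illustrative, not the actual langevals implementation):

```python
def map_litellm_env(environ: dict, kwargs: dict) -> dict:
    """Copy X_LITELLM_<name> env vars into completion kwargs, skipping the
    embeddings namespace and any all-uppercase remainder (likely an
    unrelated env var that happens to share the prefix)."""
    for key, value in environ.items():
        if not key.startswith("X_LITELLM_") or key.startswith("X_LITELLM_EMBEDDINGS_"):
            continue
        name = key[len("X_LITELLM_"):]
        if name.isupper():
            continue
        kwargs[name] = value
    return kwargs

mapped = map_litellm_env(
    {
        "X_LITELLM_api_key": "sk-test",            # mapped -> api_key
        "X_LITELLM_EMBEDDINGS_api_key": "sk-emb",  # embeddings namespace, skipped here
        "X_LITELLM_SOME_FLAG": "1",                # all-uppercase remainder, skipped
        "PATH": "/usr/bin",                        # unrelated, skipped
    },
    {},
)
# mapped == {"api_key": "sk-test"}
```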

Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant T as Test/Evaluator
    participant BE as BaseEvaluator.set_model_envs
    participant ENV as Process Env
    participant LP as litellm_patch.patched_completion
    participant L as _original_completion

    T->>BE: Provide env with X_LITELLM_* keys
    BE->>ENV: Export model envs (X_LITELLM_* + known vars)
    note right of BE: Azure alias handling unchanged
    T->>LP: Invoke completion(**kwargs)
    LP->>LP: Map X_LITELLM_* -> kwargs (skip all-uppercase, exclude X_LITELLM_EMBEDDINGS_*)
    LP->>L: Call original completion(**mapped kwargs)
    L-->>T: Return result

    rect rgb(235, 245, 255)
    note over BE,LP: Changed: X_LITELLM_* prefix handling for completions
    end
```

```mermaid
sequenceDiagram
    autonumber
    participant T as Test/Evaluator
    participant LP as litellm_patch.patched_embedding
    participant E as _original_embedding

    T->>LP: Invoke embedding(**kwargs)
    LP->>LP: Map X_LITELLM_EMBEDDINGS_* -> kwargs (skip all-uppercase)
    LP->>E: Call original embedding(**mapped kwargs)
    E-->>T: Return embedding
```

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

Poem

I twitch my ears at version chimes,
Little hops through patchwork times.
X marks envs where secrets flow,
Lit paths tell embeddings where to go.
Carrots compiled, tests aligned—
A tidy warren, streamlined, refined. 🥕✨



@rogeriochaves rogeriochaves merged commit ccc8280 into main Sep 2, 2025
2 of 3 checks passed
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

🧹 Nitpick comments (5)
evaluators/ragas/pyproject.toml (1)

3-3: Patch release OK; consider guarding against upstream breaking changes.

You’re pinning ragas==0.2.9 but leaving the langchain deps open-ended (>=). To reduce churn, consider adding an upper bound on those (e.g., capping below the next minor or major release) if CI isn’t catching breaks.

langevals_core/langevals_core/base_evaluator.py (1)

247-251: Avoid global env mutation races; consider scoped env.

Setting os.environ in init can leak across concurrent evaluators/tests. Prefer a scoped context around the actual LLM calls.

Example context manager (apply where calls happen):

```python
import os
from contextlib import contextmanager

@contextmanager
def temp_env(updates: dict[str, str]):
    old = {k: os.environ.get(k) for k in updates}
    try:
        os.environ.update(updates)
        yield
    finally:
        for k, v in old.items():
            if v is None:
                os.environ.pop(k, None)
            else:
                os.environ[k] = v
```

Then wrap the evaluation that depends on these vars with temp_env(self.env or {}).
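For illustration, usage could look like this (the env key is just an example; the context manager is repeated here so the sketch is self-contained):

```python
import os
from contextlib import contextmanager

@contextmanager
def temp_env(updates: dict):
    # snapshot current values so they can be restored on exit
    old = {k: os.environ.get(k) for k in updates}
    try:
        os.environ.update(updates)
        yield
    finally:
        for k, v in old.items():
            if v is None:
                os.environ.pop(k, None)
            else:
                os.environ[k] = v

with temp_env({"X_LITELLM_api_key": "sk-test"}):
    # the var is only visible while the evaluation runs
    inside = os.environ.get("X_LITELLM_api_key")
# on exit, the var is restored to its previous value (typically unset)
```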

evaluators/langevals/tests/test_llm_boolean.py (1)

83-85: Skip gracefully when ATLA_API_KEY is missing.

Prevents flaky external calls on CI when the token isn’t set.

Add a guard at the start of the test:

```python
import os
import pytest

if not os.getenv("ATLA_API_KEY"):
    pytest.skip("ATLA_API_KEY not set; skipping external ATLA test")
```
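Alternatively, a reusable `pytest.mark.skipif` marker avoids repeating the guard across the three test files (the marker and test names below are illustrative):

```python
import os
import pytest

# shared marker: skip any decorated test when the ATLA token is absent
requires_atla = pytest.mark.skipif(
    not os.getenv("ATLA_API_KEY"),
    reason="ATLA_API_KEY not set; skipping external ATLA test",
)

@requires_atla
def test_atla_evaluator():
    ...  # body that calls the external ATLA-backed evaluator
```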
evaluators/langevals/tests/test_llm_category.py (1)

92-94: Guard test when ATLA_API_KEY is absent.

Avoids CI flakes due to 401s against external API.

Add the same pytest skip guard as suggested in test_llm_boolean.py.

evaluators/langevals/tests/test_llm_score.py (1)

81-83: Skip when ATLA_API_KEY is not set.

Prevents external dependency failures on CI.

Add the same pytest skip guard as suggested in test_llm_boolean.py.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro


📥 Commits

Reviewing files that changed from the base of the PR and between e4abef3 and 0301782.

⛔ Files ignored due to path filters (10)
  • evaluators/azure/poetry.lock is excluded by !**/*.lock
  • evaluators/example/poetry.lock is excluded by !**/*.lock
  • evaluators/huggingface/poetry.lock is excluded by !**/*.lock
  • evaluators/langevals/poetry.lock is excluded by !**/*.lock
  • evaluators/legacy/poetry.lock is excluded by !**/*.lock
  • evaluators/lingua/poetry.lock is excluded by !**/*.lock
  • evaluators/openai/poetry.lock is excluded by !**/*.lock
  • evaluators/presidio/poetry.lock is excluded by !**/*.lock
  • evaluators/ragas/poetry.lock is excluded by !**/*.lock
  • poetry.lock is excluded by !**/*.lock
📒 Files selected for processing (15)
  • evaluators/azure/pyproject.toml (1 hunks)
  • evaluators/example/pyproject.toml (1 hunks)
  • evaluators/huggingface/pyproject.toml (1 hunks)
  • evaluators/langevals/pyproject.toml (1 hunks)
  • evaluators/langevals/tests/test_llm_boolean.py (1 hunks)
  • evaluators/langevals/tests/test_llm_category.py (1 hunks)
  • evaluators/langevals/tests/test_llm_score.py (1 hunks)
  • evaluators/legacy/pyproject.toml (1 hunks)
  • evaluators/lingua/pyproject.toml (1 hunks)
  • evaluators/openai/pyproject.toml (1 hunks)
  • evaluators/presidio/pyproject.toml (1 hunks)
  • evaluators/ragas/pyproject.toml (1 hunks)
  • langevals_core/langevals_core/base_evaluator.py (1 hunks)
  • langevals_core/langevals_core/litellm_patch.py (2 hunks)
  • langevals_core/pyproject.toml (1 hunks)
🔇 Additional comments (14)
evaluators/example/pyproject.toml (1)

3-3: Version bump looks fine.

No functional changes here. Ensure lockfile is regenerated and published artifacts reference the updated core.

evaluators/huggingface/pyproject.toml (1)

3-3: Version bump LGTM; keep an eye on httpx compatibility.

No code changes; deps unchanged. Verify your httpx and litellm versions remain compatible with the patched env-var behavior.

evaluators/azure/pyproject.toml (1)

3-3: Patch bump approved.

No functional diffs; dependency set unchanged. Ship with updated lockfile.

evaluators/presidio/pyproject.toml (1)

3-3: Version bump looks good.

No functional impact here.

evaluators/lingua/pyproject.toml (1)

3-3: Version bump approved.

Consistent with the repo-wide patch releases.

evaluators/langevals/pyproject.toml (1)

3-3: Version increment OK.

Given the env var prefix change elsewhere, ensure release notes mention the X_LITELLM_* migration.

evaluators/openai/pyproject.toml (1)

3-3: Version bump LGTM.

No dependency changes tied to this file.

langevals_core/langevals_core/base_evaluator.py (2)

247-251: Aligns auto-export with X_LITELLM_* — good.

Matches the safer-prefix strategy and keeps provider vars working.


247-251: No bare LITELLM_* references detected. Only intentional X_LITELLM_* usages remain in litellm_patch.py and its tests.

evaluators/legacy/pyproject.toml (1)

3-3: Version bump looks good.

Patch-level update aligned with the rest of the PR.

evaluators/langevals/tests/test_llm_boolean.py (1)

83-85: Env key rename LGTM.

Matches new X_LITELLM_* mapping.

evaluators/langevals/tests/test_llm_category.py (1)

92-94: Env key rename LGTM.

Consistent with litellm patch.

evaluators/langevals/tests/test_llm_score.py (1)

81-83: Env key rename LGTM.

Aligned with X_LITELLM_* convention.

langevals_core/langevals_core/litellm_patch.py (1)

32-40: No lingering LITELLM_ usages detected. Ripgrep search for unprefixed LITELLM_ returned no results, confirming all legacy prefixes have been removed.

Comment on lines +32 to 40
```diff
            key.startswith("X_LITELLM_")
            and not key.startswith("X_LITELLM_EMBEDDINGS_")
        ):
-           kwargs[key.replace("LITELLM_", "")] = value
+           replaced_key = key.replace("X_LITELLM_", "")
+           # check if key is all uppercase, likely not a litellm key and got here by accident
+           if replaced_key.isupper():
+               continue
+           kwargs[replaced_key] = value
```


🛠️ Refactor suggestion

Harden env->kwargs mapping: don’t override reserved/caller args; ignore empty values.

As written, X_LITELLM_* can override reserved keys like drop_params, cache, model, or any caller-provided kwarg, weakening the “safer” intent. Add guards.

Apply this diff:

```diff
             if (
                 key.startswith("X_LITELLM_")
                 and not key.startswith("X_LITELLM_EMBEDDINGS_")
             ):
                 replaced_key = key.replace("X_LITELLM_", "")
                 # check if key is all uppercase, likely not a litellm key and got here by accident
                 if replaced_key.isupper():
                     continue
+                # don't let env override reserved or caller-provided args; ignore empty values
+                if replaced_key in {"drop_params", "cache", "model", "vertex_credentials"}:
+                    continue
+                if replaced_key in kwargs:
+                    continue
+                if value in ("", None):
+                    continue
                 kwargs[replaced_key] = value
```
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
```python
            if (
                key.startswith("X_LITELLM_")
                and not key.startswith("X_LITELLM_EMBEDDINGS_")
            ):
                replaced_key = key.replace("X_LITELLM_", "")
                # check if key is all uppercase, likely not a litellm key and got here by accident
                if replaced_key.isupper():
                    continue
                # don't let env override reserved or caller-provided args; ignore empty values
                if replaced_key in {"drop_params", "cache", "model", "vertex_credentials"}:
                    continue
                if replaced_key in kwargs:
                    continue
                if value in ("", None):
                    continue
                kwargs[replaced_key] = value
```
🤖 Prompt for AI Agents
In langevals_core/langevals_core/litellm_patch.py around lines 32-40, the
env->kwargs mapping currently replaces keys from X_LITELLM_* even if they
collide with reserved names or with kwargs already passed, and it also accepts
empty values; update the loop to 1) skip any replaced_key that is in a reserved
set (e.g., {"drop_params","cache","model"} plus any other internal names) or
already present in kwargs to avoid overriding caller-provided args, 2) ignore
values that are None or empty strings (treat them as absent), and 3) continue to
preserve the existing uppercase check; if a key is skipped, do not set it in
kwargs. Ensure the check order is: compute replaced_key, skip if
replaced_key.isupper(), skip if value is falsy/empty, skip if replaced_key in
reserved set or replaced_key in kwargs, otherwise assign kwargs[replaced_key] =
value.
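Put together as a standalone function, the hardened mapping might read as follows (a sketch only: the real patch operates inline over `os.environ`, and the reserved set is the reviewer's suggestion, not an existing constant):

```python
def map_env_to_kwargs(environ: dict, kwargs: dict) -> dict:
    """Illustrative hardened mapping: env vars never override reserved
    names or caller-provided kwargs, and empty values are ignored."""
    reserved = {"drop_params", "cache", "model", "vertex_credentials"}
    for key, value in environ.items():
        if not key.startswith("X_LITELLM_"):
            continue
        if key.startswith("X_LITELLM_EMBEDDINGS_"):
            continue  # embeddings namespace is handled by the embedding patch
        name = key[len("X_LITELLM_"):]
        if name.isupper():
            continue  # likely an unrelated all-caps env var
        if name in reserved:
            continue  # keep control kwargs out of env reach
        if name in kwargs:
            continue  # caller-provided args win
        if value in ("", None):
            continue  # treat empty values as absent
        kwargs[name] = value
    return kwargs

env = {
    "X_LITELLM_api_base": "http://localhost:4000",  # collides with caller kwarg -> ignored
    "X_LITELLM_model": "gpt-4o",                    # reserved -> ignored
    "X_LITELLM_api_key": "",                        # empty -> ignored
}
result = map_env_to_kwargs(env, {"api_base": "http://caller"})
# result == {"api_base": "http://caller"}
```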

Comment on lines +54 to 60
```python
        if key.startswith("X_LITELLM_EMBEDDINGS_"):
            replaced_key = key.replace("X_LITELLM_EMBEDDINGS_", "")
            # check if key is all uppercase, likely not a litellm key and got here by accident
            if replaced_key.isupper():
                continue
            kwargs[replaced_key] = value
    return _original_embedding(*args, **kwargs)
```

🛠️ Refactor suggestion

Apply the same hardening for embeddings env mapping.

Mirror the reserved/empty/override guards.

Apply this diff:

```diff
         for key, value in os.environ.items():
             if key.startswith("X_LITELLM_EMBEDDINGS_"):
                 replaced_key = key.replace("X_LITELLM_EMBEDDINGS_", "")
                 # check if key is all uppercase, likely not a litellm key and got here by accident
                 if replaced_key.isupper():
                     continue
+                # don't let env override reserved or caller-provided args; ignore empty values
+                if replaced_key in {"drop_params", "model"}:
+                    continue
+                if replaced_key in kwargs:
+                    continue
+                if value in ("", None):
+                    continue
                 kwargs[replaced_key] = value
```
📝 Committable suggestion


Suggested change
```python
        if key.startswith("X_LITELLM_EMBEDDINGS_"):
            replaced_key = key.replace("X_LITELLM_EMBEDDINGS_", "")
            # check if key is all uppercase, likely not a litellm key and got here by accident
            if replaced_key.isupper():
                continue
            # don't let env override reserved or caller-provided args; ignore empty values
            if replaced_key in {"drop_params", "model"}:
                continue
            if replaced_key in kwargs:
                continue
            if value in ("", None):
                continue
            kwargs[replaced_key] = value
    return _original_embedding(*args, **kwargs)
```
🤖 Prompt for AI Agents
In langevals_core/langevals_core/litellm_patch.py around lines 54 to 60, the
embeddings env mapping block needs the same hardening as the other env mappings:
add guards to skip processing when replaced_key is empty, skip when
replaced_key.isupper() (already present), skip when replaced_key is in the
module's reserved keys list (use the existing RESERVED_ENV or RESERVED_KEYS
constant used elsewhere in this file), and skip if replaced_key already exists
in kwargs to avoid overriding explicit values; only after these checks set
kwargs[replaced_key] = value and then return _original_embedding(*args,
**kwargs).

```diff
@@ -1,6 +1,6 @@
 [tool.poetry]
 name = "langevals-core"
-version = "0.1.14"
+version = "0.1.15"
```

💡 Verification agent

🧩 Analysis chain

Patch bump OK; verify that the env-var prefix change isn’t a breaking release.

Given the switch to X_LITELLM_* in core logic, confirm no consumers depend on LITELLM_* from process env. If that was externally observable, consider a minor (0.1.x -> 0.2.0) instead of patch.

Run to surface leftover LITELLM_* usage and align deps:


🏁 Script executed:

```bash
#!/bin/bash
set -euo pipefail

echo "Leftover LITELLM_* references (should be empty):"
rg -nP -C2 '\bLITELLM_' -g '!**/dist/**' -g '!**/build/**' -g '!**/.venv/**' || true

echo
echo "New X_LITELLM_* keys usage (expect hits):"
rg -nP -C1 '\bX_LITELLM_(api_key|api_base|EMBEDDINGS_)'

echo
echo "litellm constraints across packages (ensure consistency):"
rg -nP '^\s*litellm\s*=' -g '**/pyproject.toml' || true
```

Length of output: 2436


🏁 Script executed:

```bash
#!/bin/bash
set -euo pipefail

echo "litellm dependency constraints across all pyproject.toml files (ensure consistency):"
rg -nP '^\s*litellm\s*=\s*(".*?"|\{.*?\})' -g '**/pyproject.toml' -g '!**/dist/**' -g '!**/build/**' -g '!**/.venv/**'
```

Length of output: 696


Bump to 0.2.0 to reflect breaking env-var change
Verified no residual LITELLM_* references and consistent litellm deps, but switching to X_LITELLM_* is externally breaking. In langevals_core/pyproject.toml (line 3), change

```diff
-version = "0.1.15"
+version = "0.2.0"
```
📝 Committable suggestion


Suggested change
```toml
version = "0.2.0"
```
🤖 Prompt for AI Agents
In langevals_core/pyproject.toml around line 3, update the package version from
"0.1.15" to "0.2.0" to reflect the breaking change of switching environment
variable prefixes (LITELLM_* → X_LITELLM_*); modify the version string
accordingly and verify that any release notes or changelogs and downstream
dependency references are updated to indicate the breaking env-var change.
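The semver concern can be made concrete: Poetry expands a caret constraint like `^0.1.14` to `>=0.1.14,<0.2.0`, so publishing the breaking change as 0.1.15 flows to consumers automatically, while 0.2.0 must be opted into. A toy check of that range logic (real resolvers use full PEP 440 semantics; this only handles plain `x.y.z` versions):

```python
def parse(v: str) -> tuple:
    # naive x.y.z parser, enough for this illustration
    return tuple(int(p) for p in v.split("."))

lower, upper = parse("0.1.14"), parse("0.2.0")  # Poetry's expansion of ^0.1.14

def allowed(v: str) -> bool:
    return lower <= parse(v) < upper

flows_automatically = allowed("0.1.15")  # patch bump reaches ^0.1.14 consumers
opted_into = allowed("0.2.0")            # minor bump does not
```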
