2 changes: 1 addition & 1 deletion evaluators/azure/poetry.lock

Some generated files are not rendered by default.

2 changes: 1 addition & 1 deletion evaluators/azure/pyproject.toml
@@ -1,6 +1,6 @@
[tool.poetry]
name = "langevals-azure"
version = "0.1.11"
version = "0.1.12"
description = "LangEvals Azure Content Safety evaluator for LLM outputs."
authors = ["Rogerio Chaves <rogerio@langwatch.ai>", "Yevhenii Budnyk <y.budnyk789@gmail.com>"]
license = "MIT"
2 changes: 1 addition & 1 deletion evaluators/example/poetry.lock


2 changes: 1 addition & 1 deletion evaluators/example/pyproject.toml
@@ -1,6 +1,6 @@
[tool.poetry]
name = "langevals-example"
version = "0.1.10"
version = "0.1.11"
description = "LangEvals boilerplate example evaluator for LLMs."
authors = ["Your Name <your.email@example.com>"]
license = "MIT"
2 changes: 1 addition & 1 deletion evaluators/huggingface/poetry.lock


2 changes: 1 addition & 1 deletion evaluators/huggingface/pyproject.toml
@@ -1,6 +1,6 @@
[tool.poetry]
name = "langevals-huggingface"
version = "0.1.9"
version = "0.1.10"
description = "Evaluators based on models available on huggingface."
authors = [
"Rogerio Chaves <rogerio@langwatch.ai>",
2 changes: 1 addition & 1 deletion evaluators/langevals/poetry.lock


2 changes: 1 addition & 1 deletion evaluators/langevals/pyproject.toml
@@ -1,6 +1,6 @@
[tool.poetry]
name = "langevals-langevals"
version = "0.1.10"
version = "0.1.11"
description = "LangEvals core evaluators."
authors = ["Rogerio Chaves <rogerio@langwatch.ai>", "Yevhenii Budnyk <y.budnyk789@gmail.com>"]
license = "MIT"
4 changes: 2 additions & 2 deletions evaluators/langevals/tests/test_llm_boolean.py
@@ -80,8 +80,8 @@ def test_llm_as_judge_atla_ai():
prompt="Is the recipe vegetarian?",
),
env={
"LITELLM_api_key": os.getenv("ATLA_API_KEY", ""),
"LITELLM_api_base": "https://api.atla-ai.com/v1",
"X_LITELLM_api_key": os.getenv("ATLA_API_KEY", ""),
"X_LITELLM_api_base": "https://api.atla-ai.com/v1",
},
)

4 changes: 2 additions & 2 deletions evaluators/langevals/tests/test_llm_category.py
@@ -89,8 +89,8 @@ def test_llm_as_judge_atla_ai():
],
),
env={
"LITELLM_api_key": os.getenv("ATLA_API_KEY", ""),
"LITELLM_api_base": "https://api.atla-ai.com/v1",
"X_LITELLM_api_key": os.getenv("ATLA_API_KEY", ""),
"X_LITELLM_api_base": "https://api.atla-ai.com/v1",
},
)

4 changes: 2 additions & 2 deletions evaluators/langevals/tests/test_llm_score.py
@@ -78,8 +78,8 @@ def test_llm_as_judge_atla_ai():
prompt="You are an LLM evaluator. Please score from 0.0 to 1.0 how likely the user is to be satisfied with this answer, from 0.0 being not satisfied at all to 1.0 being completely satisfied.",
),
env={
"LITELLM_api_key": os.getenv("ATLA_API_KEY", ""),
"LITELLM_api_base": "https://api.atla-ai.com/v1",
"X_LITELLM_api_key": os.getenv("ATLA_API_KEY", ""),
"X_LITELLM_api_base": "https://api.atla-ai.com/v1",
},
)
result = evaluator.evaluate(
2 changes: 1 addition & 1 deletion evaluators/legacy/poetry.lock


2 changes: 1 addition & 1 deletion evaluators/legacy/pyproject.toml
@@ -1,6 +1,6 @@
[tool.poetry]
name = "langevals-legacy"
version = "0.1.11"
version = "0.1.12"
description = "LangEvals Legacy evaluator"
authors = [
"Rogerio Chaves <rogerio@langwatch.ai>",
2 changes: 1 addition & 1 deletion evaluators/lingua/poetry.lock


2 changes: 1 addition & 1 deletion evaluators/lingua/pyproject.toml
@@ -1,6 +1,6 @@
[tool.poetry]
name = "langevals-lingua"
version = "0.1.9"
version = "0.1.10"
description = "LangEvals lingua evaluator for language detection."
authors = ["Rogerio Chaves <rogerio@langwatch.ai>"]
license = "MIT"
2 changes: 1 addition & 1 deletion evaluators/openai/poetry.lock


2 changes: 1 addition & 1 deletion evaluators/openai/pyproject.toml
@@ -1,6 +1,6 @@
[tool.poetry]
name = "langevals-openai"
version = "0.1.10"
version = "0.1.11"
description = "LangEvals OpenAI moderation evaluator for LLM outputs."
authors = ["Rogerio Chaves <rogerio@langwatch.ai>"]
license = "MIT"
2 changes: 1 addition & 1 deletion evaluators/presidio/poetry.lock


2 changes: 1 addition & 1 deletion evaluators/presidio/pyproject.toml
@@ -1,6 +1,6 @@
[tool.poetry]
name = "langevals-presidio"
version = "0.1.4"
version = "0.1.5"
description = "LangEvals integration for Presidio"
authors = ["Rogerio Chaves <rogerio@langwatch.ai>"]
license = "MIT"
2 changes: 1 addition & 1 deletion evaluators/ragas/poetry.lock


2 changes: 1 addition & 1 deletion evaluators/ragas/pyproject.toml
@@ -1,6 +1,6 @@
[tool.poetry]
name = "langevals-ragas"
version = "0.1.11"
version = "0.1.12"
description = "LangEvals Ragas evaluator"
authors = [
"Rogerio Chaves <rogerio@langwatch.ai>",
2 changes: 1 addition & 1 deletion langevals_core/langevals_core/base_evaluator.py
@@ -246,7 +246,7 @@ def get_env(self, var: str):
def set_model_envs(self):
# Those variables may be used non-explicitly, so we need to set them globally here for the arguments given
for key, value in (self.env or {}).items():
if key in models_env_vars or key.startswith("LITELLM_"):
if key in models_env_vars or key.startswith("X_LITELLM_"):
os.environ[key] = value

# azure alias for litellm
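In isolation, the `set_model_envs` change can be sketched as follows; `models_env_vars` here is a hypothetical stand-in for the real allow-list in `base_evaluator.py`:

```python
import os

# Hypothetical stand-in for langevals_core's models_env_vars allow-list.
models_env_vars = {"OPENAI_API_KEY", "AZURE_API_KEY"}

def set_model_envs(env):
    # Export allow-listed provider vars plus any X_LITELLM_-prefixed
    # overrides, so libraries that read os.environ implicitly can see them.
    for key, value in (env or {}).items():
        if key in models_env_vars or key.startswith("X_LITELLM_"):
            os.environ[key] = value

set_model_envs({
    "X_LITELLM_api_base": "https://api.atla-ai.com/v1",
    "SOME_UNRELATED_VAR": "ignored",  # not allow-listed, not exported
})
print(os.environ["X_LITELLM_api_base"])  # → https://api.atla-ai.com/v1
```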
20 changes: 13 additions & 7 deletions langevals_core/langevals_core/litellm_patch.py
@@ -29,12 +29,14 @@ def patched_completion(*args, **kwargs):

for key, value in os.environ.items():
if (
key.startswith("LITELLM_")
and not key.startswith("LITELLM_EMBEDDINGS_")
and key != "LITELLM_LOG"
and key != "LITELLM_LOCAL_MODEL_COST_MAP"
key.startswith("X_LITELLM_")
and not key.startswith("X_LITELLM_EMBEDDINGS_")
):
kwargs[key.replace("LITELLM_", "")] = value
replaced_key = key.replace("X_LITELLM_", "")
# check if key is all uppercase, likely not a litellm key and got here by accident
if replaced_key.isupper():
continue
kwargs[replaced_key] = value
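The loop above amounts to a small pure function; this sketch (with `litellm_kwargs_from_env` as an illustrative name, not part of the patch) shows which keys survive the prefix stripping and the uppercase guard:

```python
def litellm_kwargs_from_env(environ):
    # Strip the X_LITELLM_ prefix and forward the remainder as a
    # litellm.completion keyword argument; embeddings-specific keys and
    # all-uppercase remainders (likely unrelated env vars) are skipped.
    kwargs = {}
    for key, value in environ.items():
        if key.startswith("X_LITELLM_") and not key.startswith("X_LITELLM_EMBEDDINGS_"):
            replaced_key = key.removeprefix("X_LITELLM_")
            if replaced_key.isupper():
                continue
            kwargs[replaced_key] = value
    return kwargs

env = {
    "X_LITELLM_api_key": "sk-test",
    "X_LITELLM_EMBEDDINGS_api_key": "sk-emb",  # embeddings-only, skipped here
    "X_LITELLM_SOME_FLAG": "1",                # all-uppercase remainder, skipped
    "PATH": "/usr/bin",                        # unprefixed, skipped
}
print(litellm_kwargs_from_env(env))  # → {'api_key': 'sk-test'}
```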

Comment on lines +32 to 40
🛠️ Refactor suggestion

Harden env->kwargs mapping: don’t override reserved/caller args; ignore empty values.

As written, X_LITELLM_* can override reserved keys like drop_params, cache, model, or any caller-provided kwarg, weakening the “safer” intent. Add guards.

Apply this diff:

             if (
                 key.startswith("X_LITELLM_")
                 and not key.startswith("X_LITELLM_EMBEDDINGS_")
             ):
                 replaced_key = key.replace("X_LITELLM_", "")
                 # check if key is all uppercase, likely not a litellm key and got here by accident
                 if replaced_key.isupper():
                     continue
+                # don't let env override reserved or caller-provided args; ignore empty values
+                if replaced_key in {"drop_params", "cache", "model", "vertex_credentials"}:
+                    continue
+                if replaced_key in kwargs:
+                    continue
+                if value in ("", None):
+                    continue
                 kwargs[replaced_key] = value
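A runnable sketch of the hardened mapping this suggestion describes, with the reserved-name set taken from the diff above (`hardened_env_kwargs` is an illustrative name, not a real API):

```python
def hardened_env_kwargs(environ, caller_kwargs):
    # Hardened variant: reserved names, keys the caller already passed,
    # and empty values are never taken from the environment.
    reserved = {"drop_params", "cache", "model", "vertex_credentials"}
    kwargs = dict(caller_kwargs)
    for key, value in environ.items():
        if not key.startswith("X_LITELLM_") or key.startswith("X_LITELLM_EMBEDDINGS_"):
            continue
        replaced_key = key.removeprefix("X_LITELLM_")
        if replaced_key.isupper():  # likely an unrelated env var
            continue
        if replaced_key in reserved or replaced_key in kwargs:
            continue
        if value in ("", None):
            continue
        kwargs[replaced_key] = value
    return kwargs

out = hardened_env_kwargs(
    {"X_LITELLM_model": "env-model", "X_LITELLM_api_key": "k", "X_LITELLM_api_base": ""},
    {"model": "gpt-4o-mini"},
)
print(out)  # → {'model': 'gpt-4o-mini', 'api_key': 'k'}
```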

return _original_completion(*args, **kwargs)

@@ -49,8 +51,12 @@ def patched_embedding(*args, **kwargs):
# if os.environ.get("GOOGLE_APPLICATION_CREDENTIALS") is not None:
# kwargs["vertex_credentials"] = os.environ["GOOGLE_APPLICATION_CREDENTIALS"]
for key, value in os.environ.items():
if key.startswith("LITELLM_EMBEDDINGS_"):
kwargs[key.replace("LITELLM_EMBEDDINGS_", "")] = value
if key.startswith("X_LITELLM_EMBEDDINGS_"):
replaced_key = key.replace("X_LITELLM_EMBEDDINGS_", "")
# check if key is all uppercase, likely not a litellm key and got here by accident
if replaced_key.isupper():
continue
kwargs[replaced_key] = value
return _original_embedding(*args, **kwargs)
Comment on lines +54 to 60
🛠️ Refactor suggestion

Apply the same hardening for embeddings env mapping.

Mirror the reserved/empty/override guards.

Apply this diff:

         for key, value in os.environ.items():
-            if key.startswith("X_LITELLM_EMBEDDINGS_"):
-                replaced_key = key.replace("X_LITELLM_EMBEDDINGS_", "")
-                # check if key is all uppercase, likely not a litellm key and got here by accident
-                if replaced_key.isupper():
-                    continue
-                kwargs[replaced_key] = value
+            if key.startswith("X_LITELLM_EMBEDDINGS_"):
+                replaced_key = key.replace("X_LITELLM_EMBEDDINGS_", "")
+                # check if key is all uppercase, likely not a litellm key and got here by accident
+                if replaced_key.isupper():
+                    continue
+                # don't let env override reserved or caller-provided args; ignore empty values
+                if replaced_key in {"drop_params", "model"}:
+                    continue
+                if replaced_key in kwargs:
+                    continue
+                if value in ("", None):
+                    continue
+                kwargs[replaced_key] = value


litellm.embedding = patched_embedding
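The embedding patch mirrors the completion one with a longer prefix; a minimal sketch, assuming the same uppercase guard (`embedding_kwargs_from_env` is an illustrative name):

```python
def embedding_kwargs_from_env(environ):
    # Mirrors patched_embedding: only X_LITELLM_EMBEDDINGS_* keys are
    # forwarded, so completion-level overrides never leak into embeddings.
    kwargs = {}
    for key, value in environ.items():
        if key.startswith("X_LITELLM_EMBEDDINGS_"):
            replaced_key = key.removeprefix("X_LITELLM_EMBEDDINGS_")
            if replaced_key.isupper():  # likely an unrelated env var
                continue
            kwargs[replaced_key] = value
    return kwargs

print(embedding_kwargs_from_env({
    "X_LITELLM_EMBEDDINGS_api_key": "emb-key",
    "X_LITELLM_api_key": "chat-key",  # completion-level, ignored here
}))  # → {'api_key': 'emb-key'}
```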
2 changes: 1 addition & 1 deletion langevals_core/pyproject.toml
@@ -1,6 +1,6 @@
[tool.poetry]
name = "langevals-core"
version = "0.1.14"
version = "0.1.15"
💡 Verification agent

🧩 Analysis chain

Patch bump OK; verify that the env-var prefix change isn’t a breaking release.

Given the switch to X_LITELLM_* in core logic, confirm no consumers depend on LITELLM_* from process env. If that was externally observable, consider a minor (0.1.x -> 0.2.0) instead of patch.

Run to surface leftover LITELLM_* usage and align deps:


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "Leftover LITELLM_* references (should be empty):"
rg -nP -C2 '\bLITELLM_' -g '!**/dist/**' -g '!**/build/**' -g '!**/.venv/**' || true

echo
echo "New X_LITELLM_* keys usage (expect hits):"
rg -nP -C1 '\bX_LITELLM_(api_key|api_base|EMBEDDINGS_)'

echo
echo "litellm constraints across packages (ensure consistency):"
rg -nP '^\s*litellm\s*=\s*"[^^][^"]*|^\s*litellm\s*=\s*\^" -g '**/pyproject.toml' || true

Length of output: 2436


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "litellm dependency constraints across all pyproject.toml files (ensure consistency):"
rg -nP '^\s*litellm\s*=\s*(".*?"|\{.*?\})' -g '**/pyproject.toml' -g '!**/dist/**' -g '!**/build/**' -g '!**/.venv/**'

Length of output: 696


Bump to 0.2.0 to reflect breaking env-var change
Verified no residual LITELLM_* references and consistent litellm deps, but switching to X_LITELLM_* is externally breaking. In langevals_core/pyproject.toml (line 3), change

-version = "0.1.15"
+version = "0.2.0"

description = "Core package for LLM evaluation platform, providing base classes and utilities."
authors = [
"Rogerio Chaves <rogerio@langwatch.ai>",
20 changes: 10 additions & 10 deletions poetry.lock
