
Commit 32a096d

Merge pull request #33 from shcherbak-ai/dev
v0.6.1
2 parents 84d56bd + 314b7f8

17 files changed (+2164 additions, -1910 deletions)

CHANGELOG.md

Lines changed: 4 additions & 0 deletions
@@ -5,6 +5,10 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
 
 - **Refactor**: Code reorganization that doesn't change functionality but improves structure or maintainability
 
+## [0.6.1](https://github.com/shcherbak-ai/contextgem/releases/tag/v0.6.1) - 2025-06-04
+### Changed
+- Updated documentation for LM Studio models to clarify dummy API key requirement
+
 ## [0.6.0](https://github.com/shcherbak-ai/contextgem/releases/tag/v0.6.0) - 2025-06-03
 ### Added
 - LabelConcept - a classification concept type that categorizes content using predefined labels.

CONTRIBUTING.md

Lines changed: 4 additions & 1 deletion
@@ -179,6 +179,7 @@ You **must** re-record cassettes if:
 - You modified any parameters in LLM API calls
 - You're writing a new test that calls the LLM API
 - The existing cassettes are no longer compatible with your changes
+- You made changes to local LLM API interactions (requires Ollama and LM Studio installed on your system)
 
 #### How to Re-record Cassettes
 
@@ -195,7 +196,9 @@ You **must** re-record cassettes if:
 ```
 CONTEXTGEM_LOGGER_LEVEL=DEBUG
 ```
-4. Run your tests, which will create new cassette files
+4. **For local LLM testing**: Install [Ollama](https://ollama.ai/) and [LM Studio](https://lmstudio.ai/) on your system if you're testing local model interactions
+
+5. Run your tests, which will create new cassette files
 
 **Important**: Re-recording cassettes will use your OpenAI API key and may incur charges to your account based on the number and type of API calls made during testing. Please be aware of these potential costs before re-recording.

contextgem/__init__.py

Lines changed: 1 addition & 1 deletion
@@ -20,7 +20,7 @@
 ContextGem - Effortless LLM extraction from documents
 """
 
-__version__ = "0.6.0"
+__version__ = "0.6.1"
 __author__ = "Shcherbak AI AS"
 
 from contextgem.public import (

dev/notebooks/docs/concepts/date_concept/refs_and_justifications.ipynb

Lines changed: 2 additions & 3 deletions
@@ -61,7 +61,7 @@
 "completion_date_concept = DateConcept(\n",
 " name=\"Project completion date\",\n",
 " description=\"The final completion date for the website redesign project\",\n",
-" add_justifications=True, # enable justifications to understand extraction reasoning\n",
+" add_justifications=True, # enable justifications to understand extraction logic\n",
 " justification_depth=\"balanced\",\n",
 " justification_max_sents=3, # allow up to 3 sentences for the calculation justification\n",
 " add_references=True, # include references to source text\n",
@@ -74,11 +74,10 @@
 "\n",
 "# Configure DocumentLLM\n",
 "llm = DocumentLLM(\n",
-" model=\"azure/o4-mini\",\n",
+" model=\"azure/gpt-4.1\",\n",
 " api_key=os.getenv(\"CONTEXTGEM_AZURE_OPENAI_API_KEY\"),\n",
 " api_version=os.getenv(\"CONTEXTGEM_AZURE_OPENAI_API_VERSION\"),\n",
 " api_base=os.getenv(\"CONTEXTGEM_AZURE_OPENAI_API_BASE\"),\n",
-" reasoning_effort=\"medium\",\n",
 ")\n",
 "\n",
 "# Extract the concept\n",

dev/requirements/requirements.dev.txt

Lines changed: 207 additions & 207 deletions
Large diffs are not rendered by default.

dev/requirements/requirements.main.txt

Lines changed: 207 additions & 207 deletions
Large diffs are not rendered by default.

dev/usage_examples/docs/concepts/date_concept/refs_and_justifications.py

Lines changed: 2 additions & 3 deletions
@@ -26,7 +26,7 @@
 completion_date_concept = DateConcept(
     name="Project completion date",
     description="The final completion date for the website redesign project",
-    add_justifications=True,  # enable justifications to understand extraction reasoning
+    add_justifications=True,  # enable justifications to understand extraction logic
     justification_depth="balanced",
     justification_max_sents=3,  # allow up to 3 sentences for the calculation justification
     add_references=True,  # include references to source text
@@ -39,11 +39,10 @@
 
 # Configure DocumentLLM
 llm = DocumentLLM(
-    model="azure/o4-mini",
+    model="azure/gpt-4.1",
     api_key=os.getenv("CONTEXTGEM_AZURE_OPENAI_API_KEY"),
     api_version=os.getenv("CONTEXTGEM_AZURE_OPENAI_API_VERSION"),
     api_base=os.getenv("CONTEXTGEM_AZURE_OPENAI_API_BASE"),
-    reasoning_effort="medium",
 )
 
 # Extract the concept
dev/usage_examples/docs/llms/llm_init/lm_studio_connection_error_fix.py

Lines changed: 10 additions & 0 deletions
@@ -0,0 +1,10 @@
+from contextgem import DocumentLLM
+
+llm = DocumentLLM(
+    model="lm_studio/meta-llama-3.1-8b-instruct",
+    api_base="http://localhost:1234/v1",
+    api_key="dummy-key",  # dummy key to avoid connection error
+)
+
+# This is a known issue with calling LM Studio API in litellm:
+# https://github.com/openai/openai-python/issues/961
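The dummy-key fix above works because OpenAI-compatible client libraries refuse to construct a client without an API key, even when the target server (here, LM Studio on localhost) ignores it. The pattern can be sketched with a small, hypothetical `resolve_api_key` helper; this is an illustration of the workaround, not part of ContextGem's actual API:

```python
# Hypothetical sketch of the dummy-key workaround pattern (assumption,
# not ContextGem's implementation). OpenAI-compatible clients raise at
# construction when no API key is set, even for local servers that ignore
# the key (see openai/openai-python#961), so any non-empty placeholder
# satisfies the client-side check without affecting the local server.

def resolve_api_key(api_key, api_base):
    """Return a usable API key, substituting a placeholder for local endpoints."""
    if api_key:
        return api_key
    # Local endpoints such as LM Studio at localhost:1234 ignore the key,
    # but the client library still requires one to be present.
    if "localhost" in api_base or "127.0.0.1" in api_base:
        return "dummy-key"
    raise ValueError("api_key is required for remote endpoints")


print(resolve_api_key(None, "http://localhost:1234/v1"))  # dummy-key
print(resolve_api_key("sk-real", "https://api.openai.com/v1"))  # sk-real
```

LM Studio never validates the placeholder; it only needs to be present so the client constructor does not raise.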

docs/docs-raw-for-llm.txt

Lines changed: 48 additions & 3 deletions
@@ -5894,7 +5894,7 @@ date information was inferred:
 completion_date_concept = DateConcept(
     name="Project completion date",
     description="The final completion date for the website redesign project",
-    add_justifications=True,  # enable justifications to understand extraction reasoning
+    add_justifications=True,  # enable justifications to understand extraction logic
     justification_depth="balanced",
     justification_max_sents=3,  # allow up to 3 sentences for the calculation justification
     add_references=True,  # include references to source text
@@ -5907,11 +5907,10 @@ date information was inferred:
 
 # Configure DocumentLLM
 llm = DocumentLLM(
-    model="azure/o4-mini",
+    model="azure/gpt-4.1",
     api_key=os.getenv("CONTEXTGEM_AZURE_OPENAI_API_KEY"),
     api_version=os.getenv("CONTEXTGEM_AZURE_OPENAI_API_VERSION"),
     api_base=os.getenv("CONTEXTGEM_AZURE_OPENAI_API_BASE"),
-    reasoning_effort="medium",
 )
 
 # Extract the concept
@@ -7613,6 +7612,29 @@ Using local LLM providers
 # see DocumentLLM API reference for all configuration options
 )
 
+Note:
+
+**LM Studio Connection Error**: If you encounter a connection error
+("litellm.APIError: APIError: Lm_studioException - Connection error")
+when using LM Studio, check that you have provided a dummy API key.
+While API keys are usually not expected for local models, this is a
+specific case where LM Studio requires one:
+
+LM Studio with dummy API key
+
+from contextgem import DocumentLLM
+
+llm = DocumentLLM(
+    model="lm_studio/meta-llama-3.1-8b-instruct",
+    api_base="http://localhost:1234/v1",
+    api_key="dummy-key",  # dummy key to avoid connection error
+)
+
+# This is a known issue with calling LM Studio API in litellm:
+# https://github.com/openai/openai-python/issues/961
+
+This is a known issue with calling LM Studio API in litellm:
+https://github.com/openai/openai-python/issues/961
+
 For a complete list of configuration options available when
 initializing DocumentLLM instances, see the next section Configuring
 LLM(s).
@@ -7687,6 +7709,29 @@ Using a local LLM
 # see DocumentLLM API reference for all configuration options
 )
 
+Note:
+
+**LM Studio Connection Error**: If you encounter a connection error
+("litellm.APIError: APIError: Lm_studioException - Connection error")
+when using LM Studio, check that you have provided a dummy API key.
+While API keys are usually not expected for local models, this is a
+specific case where LM Studio requires one:
+
+LM Studio with dummy API key
+
+from contextgem import DocumentLLM
+
+llm = DocumentLLM(
+    model="lm_studio/meta-llama-3.1-8b-instruct",
+    api_base="http://localhost:1234/v1",
+    api_key="dummy-key",  # dummy key to avoid connection error
+)
+
+# This is a known issue with calling LM Studio API in litellm:
+# https://github.com/openai/openai-python/issues/961
+
+This is a known issue with calling LM Studio API in litellm:
+https://github.com/openai/openai-python/issues/961
+
 
 📝 Configuration Parameters
 ===========================

docs/source/conf.py

Lines changed: 1 addition & 1 deletion
@@ -22,7 +22,7 @@
 project = "ContextGem"
 copyright = "2025, Shcherbak AI AS"
 author = "Sergii Shcherbak"
-release = "0.6.0"
+release = "0.6.1"
 
 
 # Add path to the package

docs/source/llms/llm_config.rst

Lines changed: 9 additions & 0 deletions
@@ -40,6 +40,15 @@ For local models, usually you need to specify the ``api_base`` instead of the AP
    :language: python
    :caption: Using a local LLM
 
+.. note::
+   **LM Studio Connection Error**: If you encounter a connection error (``litellm.APIError: APIError: Lm_studioException - Connection error``) when using LM Studio, check that you have provided a dummy API key. While API keys are usually not expected for local models, this is a specific case where LM Studio requires one:
+
+   .. literalinclude:: ../../../dev/usage_examples/docs/llms/llm_init/lm_studio_connection_error_fix.py
+      :language: python
+      :caption: LM Studio with dummy API key
+
+   This is a known issue with calling LM Studio API in litellm: https://github.com/openai/openai-python/issues/961
+
 
 📝 Configuration Parameters
 -----------------------------

docs/source/llms/supported_llms.rst

Lines changed: 9 additions & 0 deletions
@@ -50,5 +50,14 @@ For local LLMs, you'll need to specify the provider, model name, and the appropr
    :language: python
    :caption: Using local LLM providers
 
+.. note::
+   **LM Studio Connection Error**: If you encounter a connection error (``litellm.APIError: APIError: Lm_studioException - Connection error``) when using LM Studio, check that you have provided a dummy API key. While API keys are usually not expected for local models, this is a specific case where LM Studio requires one:
+
+   .. literalinclude:: ../../../dev/usage_examples/docs/llms/llm_init/lm_studio_connection_error_fix.py
+      :language: python
+      :caption: LM Studio with dummy API key
+
+   This is a known issue with calling LM Studio API in litellm: https://github.com/openai/openai-python/issues/961
+
 
 For a complete list of configuration options available when initializing DocumentLLM instances, see the next section :doc:`llm_config`.
