
Conversation

@YuweiXiao commented Sep 10, 2025

With this PR, users can run claude-context in a fully local manner, powered by a local Ollama embedding service and a vector engine provided by a local PostgreSQL Docker container. All claude-context operations happen entirely on the local machine, with zero code transmitted over the internet.

To enable PostgreSQL as the vector backend, set:

VECTOR_DATABASE_PROVIDER = "postgres"
POSTGRES_CONNECTION_STRING = "postgresql://username:password@host:port/db"
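For anyone wiring this up, here is a minimal sketch of how these variables might be consumed to build a Postgres-backed store. The function name, table layout, and embedding dimension below are illustrative assumptions for the example, not the actual claude-context implementation:

```typescript
// Hypothetical sketch; assumes the `pg` client and a pgvector-enabled database.
import { Pool } from "pg";

function createPostgresVectorStore(): Pool {
  const provider = process.env.VECTOR_DATABASE_PROVIDER ?? "milvus";
  if (provider !== "postgres") {
    throw new Error(`Expected VECTOR_DATABASE_PROVIDER=postgres, got "${provider}"`);
  }

  const connectionString = process.env.POSTGRES_CONNECTION_STRING;
  if (!connectionString) {
    throw new Error("POSTGRES_CONNECTION_STRING must be set for the postgres backend");
  }
  return new Pool({ connectionString });
}

async function ensureSchema(pool: Pool): Promise<void> {
  // pgvector must be installed in the target database.
  await pool.query("CREATE EXTENSION IF NOT EXISTS vector");
  // Illustrative table: one row per indexed code chunk; 768 matches
  // common Ollama embedding models but is an assumption here.
  await pool.query(`
    CREATE TABLE IF NOT EXISTS code_chunks (
      id BIGSERIAL PRIMARY KEY,
      path TEXT NOT NULL,
      content TEXT NOT NULL,
      embedding vector(768)
    )`);
}
```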

@tan-yong-sheng

+1, hoping for this support. Thanks a lot for the work.

@prom3theu5 commented Oct 29, 2025

The work to support pgvector is great. IMO the data provider should be an abstract interface with a concrete implementation per backend, so that instead of keeping this locked to Milvus, the entire community can use it with whatever vector store they want :) (something along the lines of the sketch below)
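As a rough illustration of that kind of abstraction (the interface and method names here are made up for the example, not taken from the claude-context codebase):

```typescript
// Hypothetical provider abstraction; each backend (Milvus, pgvector, Qdrant, ...)
// would supply its own implementation of this interface.
interface VectorRecord {
  id: string;
  embedding: number[];
  metadata: Record<string, unknown>;
}

interface SearchHit {
  id: string;
  score: number;
  metadata: Record<string, unknown>;
}

interface VectorDatabaseProvider {
  ensureCollection(name: string, dimension: number): Promise<void>;
  upsert(collection: string, records: VectorRecord[]): Promise<void>;
  search(collection: string, query: number[], topK: number): Promise<SearchHit[]>;
  drop(collection: string): Promise<void>;
}

// A factory keyed off VECTOR_DATABASE_PROVIDER could then choose the backend:
// "milvus" | "postgres" | "qdrant" | ...
```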

There's no need to add Ollama backend code. Ollama already exposes an OpenAI-compatible API, and you can already point the OpenAI base URL at Ollama or LM Studio.
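For example, a sketch using the official `openai` npm client; the model name is an assumption and the port is Ollama's default:

```typescript
import OpenAI from "openai";

// Ollama serves an OpenAI-compatible API under /v1 on its default port.
const client = new OpenAI({
  baseURL: "http://localhost:11434/v1",
  apiKey: "ollama", // required by the client, ignored by Ollama
});

const response = await client.embeddings.create({
  model: "nomic-embed-text", // any embedding model pulled into Ollama
  input: "function add(a, b) { return a + b; }",
});

console.log(response.data[0].embedding.length);
```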

It would be nice to get a clear response, yes or no, on whether this will be considered for merging. If it is, once I get some free time (not until after Christmas / February) I'll take it upon myself to implement Chroma, Pinecone, and Qdrant.

My time is very limited at the moment, unfortunately.

@YuweiXiao (Author) commented

@prom3theu5 hi, thanks for taking a look. The repo has been inactive for a month, so I'm not sure whether it's still maintained. Ollama embeddings are already supported in the core package; I extended the code to make them configurable.
