
Ollama Setup on Ubuntu #1697

Open
DrNokkel opened this issue Mar 13, 2025 · 4 comments

@DrNokkel

I performed the standard installation using the setup.sh file.
After the Ollama setup completed successfully (no modifications, just the default settings), I can access the frontend. There, I can upload files and adjust options.
However, the chat does not work. It keeps showing this error message:

(screenshot of the error message)

Ollama is accessible via the default port and indicates that it is running.

I tried installing Ollama separately and linking it, but the same error occurs. In the terminal, I can chat with the LLM without any issues.

I also tried adjusting the .env file afterward, but that didn’t help either.

I am running the system on Ubuntu.
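
For reference, a minimal sketch of how such a terminal check can be done, assuming the default Ollama install and the llama3.2 model (the exact commands may have differed):

# chat with the model interactively
ollama run llama3.2

# or hit the HTTP API on the default port
curl http://localhost:11434/api/generate -d '{"model":"llama3.2","prompt":"hello","stream":false}'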

@dartpain
Contributor

Hmm,

Can you provide me with your .env file, and please tell me which Ollama model you are using?

Also, do you see any errors inside your backend container?
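
One quick way to verify is to check which variables the backend process actually received. A sketch assuming a standard compose deployment (substitute the real container name from docker ps):

# find the backend container
docker ps

# dump the env vars the backend actually sees
docker exec deployment-backend-1 env | grep -E 'LLM_NAME|MODEL_NAME|OPENAI_BASE_URL|API_KEY'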

@DrNokkel
Author

My .env file

API_KEY=None
LLM_NAME=openai
MODEL_NAME=llama3.2
VITE_API_STREAMING=true
OPENAI_BASE_URL=http://localhost:11434
EMBEDDINGS_NAME=huggingface_sentence-transformers/all-mpnet-base-v2

Model: llama3.2
(screenshot of the model details)

......:~/DocsGPT$ curl http://localhost:11434/api/tags
{"models":[{"name":"llama3.2:latest","model":"llama3.2:latest","modified_at":"2025-03-13T14:54:04.558023555+01:00","size":2019393189,"digest":"a80c4f17acd55265feec403c7aef86be0c25983ab279d83f3bcd3abbcb5b8b72","details":{"parent_model":"","format":"gguf","family":"llama","families":["llama"],"parameter_size":"3.2B","quantization_level":"Q4_K_M"}}]}

...:/DocsGPT$ docker ps
CONTAINER ID   IMAGE                COMMAND                  CREATED       STATUS             PORTS                    NAMES
12f24b40f91c   deployment-backend   "gunicorn -w 2 --tim…"   3 hours ago   Up About an hour   0.0.0.0:7091->7091/tcp   deployment-backend-1

Logs from the backend:

...:/DocsGPT$ docker logs 12f24b40f91c
[2025-03-13 13:55:05 +0000] [1] [INFO] Starting gunicorn 23.0.0
[2025-03-13 13:55:05 +0000] [1] [INFO] Listening at: http://0.0.0.0:7091 (1)
[2025-03-13 13:55:05 +0000] [1] [INFO] Using worker: sync
[2025-03-13 13:55:05 +0000] [6] [INFO] Booting worker with pid: 6
[2025-03-13 13:55:05 +0000] [7] [INFO] Booting worker with pid: 7
[2025-03-13 13:55:10,494] WARNING in user_agent: USER_AGENT environment variable not set, consider setting it to identify your requests.
[2025-03-13 13:55:10,495] WARNING in user_agent: USER_AGENT environment variable not set, consider setting it to identify your requests.
[2025-03-13 13:55:24,428] INFO in routes: /api/answer - request_data: {'question': 'hallo', 'history': '[{"prompt":"hallo"}]', 'conversation_id': None, 'prompt_id': 'default', 'chunks': '2', 'token_limit': 2000, 'isNoneDoc': False, 'active_docs': '67d19f3baecd9a5377eb17ff', 'retriever': 'classic'}, source: {'active_docs': '67d19f3baecd9a5377eb17ff'}
[2025-03-13 13:55:24,436] ERROR in routes: /api/answer - error: No LLM class found for type - traceback: Traceback (most recent call last):
File "/app/application/api/answer/routes.py", line 539, in post
agent = AgentCreator.create_agent(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/application/agents/agent_creator.py", line 14, in create_agent
return agent_class(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/application/agents/classic_agent.py", line 21, in init
super().init(endpoint, llm_name, gpt_model, api_key, user_api_key)
File "/app/application/agents/base.py", line 14, in init
self.llm = LLMCreator.create_llm(
^^^^^^^^^^^^^^^^^^^^^^
File "/app/application/llm/llm_creator.py", line 30, in create_llm
raise ValueError(f"No LLM class found for type {type}")
ValueError: No LLM class found for type

[2025-03-13 13:59:46 +0000] [1] [INFO] Handling signal: term
[2025-03-13 13:59:47 +0000] [1] [INFO] Shutting down: Master
[2025-03-13 13:59:52 +0000] [1] [INFO] Starting gunicorn 23.0.0
[2025-03-13 13:59:52 +0000] [1] [INFO] Listening at: http://0.0.0.0:7091 (1)
[2025-03-13 13:59:52 +0000] [1] [INFO] Using worker: sync
[2025-03-13 13:59:52 +0000] [7] [INFO] Booting worker with pid: 7
[2025-03-13 13:59:52 +0000] [8] [INFO] Booting worker with pid: 8
[2025-03-13 13:59:56,760] WARNING in user_agent: USER_AGENT environment variable not set, consider setting it to identify your requests.
[2025-03-13 13:59:56,760] WARNING in user_agent: USER_AGENT environment variable not set, consider setting it to identify your requests.
[2025-03-13 14:00:10,729] INFO in routes: /api/answer - request_data: {'question': 'hallo', 'history': '[{"prompt":"hallo"}]', 'conversation_id': None, 'prompt_id': 'default', 'chunks': '2', 'token_limit': 2000, 'isNoneDoc': False, 'active_docs': '67d19f3baecd9a5377eb17ff', 'retriever': 'classic'}, source: {'active_docs': '67d19f3baecd9a5377eb17ff'}
[2025-03-13 14:00:10,731] ERROR in routes: /api/answer - error: No LLM class found for type - traceback: Traceback (most recent call last):
File "/app/application/api/answer/routes.py", line 539, in post
agent = AgentCreator.create_agent(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/application/agents/agent_creator.py", line 14, in create_agent
return agent_class(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/application/agents/classic_agent.py", line 21, in init
super().init(endpoint, llm_name, gpt_model, api_key, user_api_key)
File "/app/application/agents/base.py", line 14, in init
self.llm = LLMCreator.create_llm(
^^^^^^^^^^^^^^^^^^^^^^
File "/app/application/llm/llm_creator.py", line 30, in create_llm
raise ValueError(f"No LLM class found for type {type}")
ValueError: No LLM class found for type

[2025-03-13 15:33:17 +0000] [1] [INFO] Starting gunicorn 23.0.0
[2025-03-13 15:33:17 +0000] [1] [INFO] Listening at: http://0.0.0.0:7091 (1)
[2025-03-13 15:33:17 +0000] [1] [INFO] Using worker: sync
[2025-03-13 15:33:17 +0000] [7] [INFO] Booting worker with pid: 7
[2025-03-13 15:33:17 +0000] [8] [INFO] Booting worker with pid: 8
[2025-03-13 15:33:24,546] WARNING in user_agent: USER_AGENT environment variable not set, consider setting it to identify your requests.
[2025-03-13 15:33:24,546] WARNING in user_agent: USER_AGENT environment variable not set, consider setting it to identify your requests.

@dartpain
Contributor

Looks like the env vars did not pass correctly to the app. Note that the ValueError prints an empty value after "type", which suggests settings.LLM_NAME arrived empty.

1. Try the latest version; there was a fix that helps update some env vars.
2. Make sure you restart the shell where you are launching DocsGPT after your changes to the .env file.
3. Try logging the value of settings.LLM_NAME from within the app (see the sketch below).

Please try my suggestions in order to make sure it runs well.
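
A minimal sketch of point 3, run against the live container. The import path is an assumption based on the traceback paths under /app/application; adjust it if the settings module lives elsewhere:

docker exec -w /app deployment-backend-1 /venv/bin/python -c "
from application.core.settings import settings  # assumed import path
print('LLM_NAME =', repr(settings.LLM_NAME))
"

If this prints an empty string or None, the .env values never reached the container.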

@DrNokkel
Author

These are the backend logs after the new install.

I got another error message:

"Please try again later. We apologize for any inconvinience."

/DocsGPT$ docker logs 96f9e8caed43
[2025-03-18 14:38:21 +0000] [1] [INFO] Starting gunicorn 23.0.0
[2025-03-18 14:38:21 +0000] [1] [INFO] Listening at: http://0.0.0.0:7091 (1)
[2025-03-18 14:38:21 +0000] [1] [INFO] Using worker: sync
[2025-03-18 14:38:21 +0000] [7] [INFO] Booting worker with pid: 7
[2025-03-18 14:38:21 +0000] [8] [INFO] Booting worker with pid: 8
[2025-03-18 14:38:24,846] WARNING in user_agent: USER_AGENT environment variable not set, consider setting it to identify your requests.
[2025-03-18 14:38:24,880] WARNING in user_agent: USER_AGENT environment variable not set, consider setting it to identify your requests.
[2025-03-18 14:42:47,425] INFO in routes: /stream - request_data: {'question': 'hi', 'history': '[{"prompt":"hi"}]', 'conversation_id': None, 'prompt_id': 'default', 'chunks': '2', 'token_limit': 2000, 'isNoneDoc': True}, source: {}
[2025-03-18 14:42:47,467] INFO in logging: Starting activity: stream - 2ba54d89-4428-4818-8fe6-9e7339da614c - User: local
[2025-03-18 14:42:49,005] INFO in _base_client: Retrying request to /chat/completions in 0.401982 seconds
[2025-03-18 14:42:49,409] INFO in _base_client: Retrying request to /chat/completions in 0.773938 seconds
Error rephrasing query: Connection error.
[2025-03-18 14:42:50,192] INFO in _base_client: Retrying request to /chat/completions in 0.473776 seconds
[2025-03-18 14:42:50,667] INFO in _base_client: Retrying request to /chat/completions in 0.914955 seconds
[2025-03-18 14:42:51,584] ERROR in logging: Error in stream - 2ba54d89-4428-4818-8fe6-9e7339da614c: Connection error.
Traceback (most recent call last):
File "/venv/lib/python3.12/site-packages/httpx/_transports/default.py", line 101, in map_httpcore_exceptions
yield
File "/venv/lib/python3.12/site-packages/httpx/_transports/default.py", line 250, in handle_request
resp = self._pool.handle_request(req)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/venv/lib/python3.12/site-packages/httpcore/_sync/connection_pool.py", line 256, in handle_request
raise exc from None
File "/venv/lib/python3.12/site-packages/httpcore/_sync/connection_pool.py", line 236, in handle_request
response = connection.handle_request(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/venv/lib/python3.12/site-packages/httpcore/_sync/connection.py", line 101, in handle_request
raise exc
File "/venv/lib/python3.12/site-packages/httpcore/_sync/connection.py", line 78, in handle_request
stream = self._connect(request)
^^^^^^^^^^^^^^^^^^^^^^
File "/venv/lib/python3.12/site-packages/httpcore/_sync/connection.py", line 124, in _connect
stream = self._network_backend.connect_tcp(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/venv/lib/python3.12/site-packages/httpcore/_backends/sync.py", line 207, in connect_tcp
with map_exceptions(exc_map):
File "/usr/lib/python3.12/contextlib.py", line 158, in exit
self.gen.throw(value)
File "/venv/lib/python3.12/site-packages/httpcore/_exceptions.py", line 14, in map_exceptions
raise to_exc(exc) from exc
httpcore.ConnectError: [Errno 111] Connection refused

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/venv/lib/python3.12/site-packages/openai/_base_client.py", line 955, in _request
response = self._client.send(
^^^^^^^^^^^^^^^^^^
File "/venv/lib/python3.12/site-packages/httpx/_client.py", line 914, in send
response = self._send_handling_auth(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/venv/lib/python3.12/site-packages/httpx/_client.py", line 942, in _send_handling_auth
response = self._send_handling_redirects(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/venv/lib/python3.12/site-packages/httpx/_client.py", line 979, in _send_handling_redirects
response = self._send_single_request(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/venv/lib/python3.12/site-packages/httpx/_client.py", line 1014, in _send_single_request
response = transport.handle_request(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/venv/lib/python3.12/site-packages/httpx/_transports/default.py", line 249, in handle_request
with map_httpcore_exceptions():
File "/usr/lib/python3.12/contextlib.py", line 158, in exit
self.gen.throw(value)
File "/venv/lib/python3.12/site-packages/httpx/_transports/default.py", line 118, in map_httpcore_exceptions
raise mapped_exc(message) from exc
httpx.ConnectError: [Errno 111] Connection refused

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/app/application/logging.py", line 96, in _consume_and_log
for item in generator:
File "/app/application/agents/classic_agent.py", line 29, in gen
yield from self._gen_inner(query, retriever, log_context)
File "/app/application/agents/classic_agent.py", line 92, in _gen_inner
resp = self._llm_handler(resp, tools_dict, messages_combine, log_context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/application/agents/classic_agent.py", line 130, in _llm_handler
resp = self.llm_handler.handle_response(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/application/agents/llm_handler.py", line 74, in handle_response
for chunk in resp:
File "/app/application/usage.py", line 52, in wrapper
for r in result:
File "/app/application/cache.py", line 81, in wrapper
yield from func(self, model, messages, stream, tools, *args, **kwargs)
File "/app/application/llm/openai.py", line 123, in _raw_gen_stream
response = self.client.chat.completions.create(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/venv/lib/python3.12/site-packages/openai/_utils/_utils.py", line 279, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/venv/lib/python3.12/site-packages/openai/resources/chat/completions/completions.py", line 914, in create
return self._post(
^^^^^^^^^^^
File "/venv/lib/python3.12/site-packages/openai/_base_client.py", line 1242, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/venv/lib/python3.12/site-packages/openai/_base_client.py", line 919, in request
return self._request(
^^^^^^^^^^^^^^
File "/venv/lib/python3.12/site-packages/openai/_base_client.py", line 979, in _request
return self._retry_request(
^^^^^^^^^^^^^^^^^^^^
File "/venv/lib/python3.12/site-packages/openai/_base_client.py", line 1057, in _retry_request
return self._request(
^^^^^^^^^^^^^^
File "/venv/lib/python3.12/site-packages/openai/_base_client.py", line 979, in _request
return self._retry_request(
^^^^^^^^^^^^^^^^^^^^
File "/venv/lib/python3.12/site-packages/openai/_base_client.py", line 1057, in _retry_request
return self._request(
^^^^^^^^^^^^^^
File "/venv/lib/python3.12/site-packages/openai/_base_client.py", line 989, in _request
raise APIConnectionError(request=request) from err
openai.APIConnectionError: Connection error.
[2025-03-18 14:42:51,608] ERROR in routes: Error in stream: Connection error.
[2025-03-18 14:42:51,611] ERROR in routes: Traceback (most recent call last):
File "/venv/lib/python3.12/site-packages/httpx/_transports/default.py", line 101, in map_httpcore_exceptions
yield
File "/venv/lib/python3.12/site-packages/httpx/_transports/default.py", line 250, in handle_request
resp = self._pool.handle_request(req)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/venv/lib/python3.12/site-packages/httpcore/_sync/connection_pool.py", line 256, in handle_request
raise exc from None
File "/venv/lib/python3.12/site-packages/httpcore/_sync/connection_pool.py", line 236, in handle_request
response = connection.handle_request(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/venv/lib/python3.12/site-packages/httpcore/_sync/connection.py", line 101, in handle_request
raise exc
File "/venv/lib/python3.12/site-packages/httpcore/_sync/connection.py", line 78, in handle_request
stream = self._connect(request)
^^^^^^^^^^^^^^^^^^^^^^
File "/venv/lib/python3.12/site-packages/httpcore/_sync/connection.py", line 124, in _connect
stream = self._network_backend.connect_tcp(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/venv/lib/python3.12/site-packages/httpcore/_backends/sync.py", line 207, in connect_tcp
with map_exceptions(exc_map):
File "/usr/lib/python3.12/contextlib.py", line 158, in exit
self.gen.throw(value)
File "/venv/lib/python3.12/site-packages/httpcore/_exceptions.py", line 14, in map_exceptions
raise to_exc(exc) from exc
httpcore.ConnectError: [Errno 111] Connection refused

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/venv/lib/python3.12/site-packages/openai/_base_client.py", line 955, in _request
response = self._client.send(
^^^^^^^^^^^^^^^^^^
File "/venv/lib/python3.12/site-packages/httpx/_client.py", line 914, in send
response = self._send_handling_auth(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/venv/lib/python3.12/site-packages/httpx/_client.py", line 942, in _send_handling_auth
response = self._send_handling_redirects(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/venv/lib/python3.12/site-packages/httpx/_client.py", line 979, in _send_handling_redirects
response = self._send_single_request(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/venv/lib/python3.12/site-packages/httpx/_client.py", line 1014, in _send_single_request
response = transport.handle_request(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/venv/lib/python3.12/site-packages/httpx/_transports/default.py", line 249, in handle_request
with map_httpcore_exceptions():
File "/usr/lib/python3.12/contextlib.py", line 158, in exit
self.gen.throw(value)
File "/venv/lib/python3.12/site-packages/httpx/_transports/default.py", line 118, in map_httpcore_exceptions
raise mapped_exc(message) from exc
httpx.ConnectError: [Errno 111] Connection refused

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/app/application/api/answer/routes.py", line 237, in complete_stream
for line in answer:
File "/app/application/logging.py", line 87, in wrapper
yield from _consume_and_log(generator, context)
File "/app/application/logging.py", line 96, in _consume_and_log
for item in generator:
File "/app/application/agents/classic_agent.py", line 29, in gen
yield from self._gen_inner(query, retriever, log_context)
File "/app/application/agents/classic_agent.py", line 92, in _gen_inner
resp = self._llm_handler(resp, tools_dict, messages_combine, log_context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/application/agents/classic_agent.py", line 130, in _llm_handler
resp = self.llm_handler.handle_response(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/application/agents/llm_handler.py", line 74, in handle_response
for chunk in resp:
File "/app/application/usage.py", line 52, in wrapper
for r in result:
File "/app/application/cache.py", line 81, in wrapper
yield from func(self, model, messages, stream, tools, *args, **kwargs)
File "/app/application/llm/openai.py", line 123, in _raw_gen_stream
response = self.client.chat.completions.create(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/venv/lib/python3.12/site-packages/openai/_utils/_utils.py", line 279, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/venv/lib/python3.12/site-packages/openai/resources/chat/completions/completions.py", line 914, in create
return self._post(
^^^^^^^^^^^
File "/venv/lib/python3.12/site-packages/openai/_base_client.py", line 1242, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/venv/lib/python3.12/site-packages/openai/_base_client.py", line 919, in request
return self._request(
^^^^^^^^^^^^^^
File "/venv/lib/python3.12/site-packages/openai/_base_client.py", line 979, in _request
return self._retry_request(
^^^^^^^^^^^^^^^^^^^^
File "/venv/lib/python3.12/site-packages/openai/_base_client.py", line 1057, in _retry_request
return self._request(
^^^^^^^^^^^^^^
File "/venv/lib/python3.12/site-packages/openai/_base_client.py", line 979, in _request
return self._retry_request(
^^^^^^^^^^^^^^^^^^^^
File "/venv/lib/python3.12/site-packages/openai/_base_client.py", line 1057, in _retry_request
return self._request(
^^^^^^^^^^^^^^
File "/venv/lib/python3.12/site-packages/openai/_base_client.py", line 989, in _request
raise APIConnectionError(request=request) from err
openai.APIConnectionError: Connection error.
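
Taken together with the .env above, the repeated "[Errno 111] Connection refused" looks like the classic Docker networking trap: OPENAI_BASE_URL=http://localhost:11434 makes the backend container dial itself rather than the host where Ollama listens. A sketch of the usual .env adjustment, assuming Ollama runs on the host (host.docker.internal resolves out of the box on Docker Desktop; on a plain Linux host it needs the extra_hosts mapping shown in the comments):

# .env: point the backend at the host's Ollama, not the container's own localhost
OPENAI_BASE_URL=http://host.docker.internal:11434

# docker-compose.yml, backend service (needed on Linux for host.docker.internal):
# extra_hosts:
#   - "host.docker.internal:host-gateway"

On Linux you can alternatively use the docker0 bridge address (commonly 172.17.0.1), or run Ollama as another compose service and use its service name as the hostname. Also note that Ollama binds to 127.0.0.1 by default, so the host side may additionally need OLLAMA_HOST=0.0.0.0 before a container can reach it.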
