Description
Hello all,
I face a strange problem when I want to run the hayhooks server and also specify the pipelines directory for it. I run it with Docker:
docker run --rm -it \
-v "$(pwd)":/workspaces/<project> \
-w /workspaces/<project> \
-p 1416:1416 \
<my_image>:latest \
bash -c "hayhooks run --host 0.0.0.0 --port 1416 --pipelines-dir /workspaces/<project>/src/api"
The output is:
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
/usr/local/lib/python3.12/site-packages/hayhooks/settings.py:65: UserWarning: Using default CORS settings - All origins, methods, and headers are allowed.
warn("Using default CORS settings - All origins, methods, and headers are allowed.", UserWarning)
INFO: Started server process [1]
INFO: Waiting for application startup.
2025-05-28 10:59:25 | INFO | hayhooks.server.app:deploy_pipelines:102 | Pipelines dir set to: /workspaces/<project>/src/api
2025-05-28 10:59:25 | INFO | hayhooks.server.app:deploy_pipelines:109 | Deploying 1 pipeline(s) from YAML files
Some secret values are not encrypted. Please use `Secret` class to encrypt them. The best way to implement it is to use `Secret.from_env` to load from environment variables. For example:
from haystack.utils import Secret
token = Secret.from_env('YOUR_TOKEN_ENV_VAR_NAME')
2025-05-28 10:59:28 | WARNING | hayhooks.server.utils.create_valid_type:handle_unsupported_types:31 | Skipping callable type: typing.Optional[typing.Callable[[haystack.dataclasses.streaming_chunk.StreamingChunk], NoneType]]
2025-05-28 10:59:28 | INFO | hayhooks.server.app:deploy_yaml_pipeline:37 | Deployed pipeline from yaml: bot_pipeline
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:1416 (Press CTRL+C to quit)
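(As far as I know hayhooks is built on FastAPI, so the routes that actually got registered should also be inspectable via the auto-generated OpenAPI schema, e.g.:
curl http://0.0.0.0:1416/openapi.json
)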
In the api directory I have a pipeline_wrapper.py script and the bot_pipeline.yml file (a rough sketch of the wrapper is included after the curl example below). On the host system I then want to call the endpoint with:
curl -X POST http://0.0.0.0:1416/bot_pipeline/run \
-H "Content-Type: application/json" \
-d '{
"inputs": {
"query": "My question",
"memory": [
]
}
}'
and this now returns {"detail":"Not Found"}.
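For context, the pipeline_wrapper.py in the api directory follows the usual hayhooks BasePipelineWrapper structure. Roughly like this (a simplified sketch, not the actual file; the run_api arguments mirror the query and memory fields from the curl call, and the component names passed to pipeline.run are placeholders):

from pathlib import Path

from haystack import Pipeline
from hayhooks import BasePipelineWrapper


class PipelineWrapper(BasePipelineWrapper):
    def setup(self) -> None:
        # Load the serialized pipeline that sits next to this wrapper
        yaml_path = Path(__file__).parent / "bot_pipeline.yml"
        self.pipeline = Pipeline.loads(yaml_path.read_text())

    def run_api(self, query: str, memory: list) -> str:
        # Component/input names here are placeholders for the real pipeline
        result = self.pipeline.run({"prompt_builder": {"query": query, "memory": memory}})
        return result["llm"]["replies"][0]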
I tried a different approach:
docker run --rm -it \
--name <project> \
-v "$(pwd)":/workspaces/<project> \
-w /workspaces/<project> \
-p 1416:1416 \
<my_image>:latest \
bash -c "hayhooks run --host 0.0.0.0 --port 1416"
Then I exec into the container with docker exec -it 2a9e514dc97b /bin/bash and run hayhooks pipeline deploy-files -n mypipeline /workspaces/<project>/src/api. I get a success message.
Quickly checking the new logs from the Docker container:
2025-05-28 12:28:03 | SUCCESS | hayhooks.server.utils.deploy_utils:add_pipeline_to_registry:399 | Pipeline successfully added to registry - {'pipeline_name': 'mypipeline', 'pipeline_dir': '/usr/local/lib/python3.12/pipelines/<project>', 'files': ['pipeline_wrapper.py', 'bot_pipeline.yml']}
Now, if I run the request again, both inside and outside the container, I get a response.
In short, the problem seems to lie with the extra --pipelines-dir parameter: if I skip it and run the hayhooks pipeline deploy-files .. command inside the container instead, there is no problem.
But I want to start the server and deploy the pipeline with a single docker run command. Does anybody know what the problem is here?
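For illustration, what I am aiming for would be something like the following untested sketch, which simply chains the two steps that do work inside a single docker run (the sleep is an arbitrary wait for the server to come up before deploying):

docker run --rm -it \
-v "$(pwd)":/workspaces/<project> \
-w /workspaces/<project> \
-p 1416:1416 \
<my_image>:latest \
bash -c "hayhooks run --host 0.0.0.0 --port 1416 & sleep 5 && hayhooks pipeline deploy-files -n mypipeline /workspaces/<project>/src/api && wait"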