llama 0.16.0 on 8xPi 4 4GB -> Critical error: Cannot open model file #260
somera started this conversation in Discussions

Replies: 1 comment · 26 replies
Hey @somera, I think you could try to use absolute paths:

realpath models/qwen3_1.7b_q40/dllama_tokenizer_qwen3_1.7b_q40.t => <absolute_path>

sudo nice -n -20 ./dllama inference ... --model <absolute_path> --tokenizer <absolute_path>

Make sure the model/tokenizer files are downloaded to the root node; workers don't need them.
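A minimal sketch of that suggestion. The /tmp directory and file layout below are stand-ins invented purely to demonstrate realpath, not the actual model download:

```shell
# Set up a throwaway directory tree so the demo is self-contained.
mkdir -p /tmp/dllama_demo/models/qwen3_1.7b_q40
touch /tmp/dllama_demo/models/qwen3_1.7b_q40/dllama_tokenizer_qwen3_1.7b_q40.t
cd /tmp/dllama_demo

# realpath resolves the relative path against the current directory.
TOKENIZER=$(realpath models/qwen3_1.7b_q40/dllama_tokenizer_qwen3_1.7b_q40.t)
echo "$TOKENIZER"

# An absolute path always starts with "/", so the binary can open it
# regardless of which working directory it is launched from.
case "$TOKENIZER" in
  /*) echo "absolute" ;;
  *)  echo "relative" ;;
esac
```

You would then pass "$TOKENIZER" (and an analogous "$MODEL") to the --tokenizer and --model flags instead of the relative paths.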
Hi,
I have 8x Raspberry Pi 4 Model B Rev 1.1 4GB with:
I followed the instructions at https://github.com/b4rtaz/distributed-llama/blob/main/docs/HOW_TO_RUN_RASPBERRYPI.md.
I could compile the sources (* (HEAD detached at v0.16.0)).
And I downloaded two models:
Then I started 7 workers:
But then on the root node ...
or
And the model file is here: