I want to use Python onnxruntime within a Flutter application to load machine learning models on-device. #5225
Replies: 1 comment 1 reply
The last time I looked at ONNX Runtime on embedded devices/Android/iOS, you needed to compile the binaries from source (i.e. prebuilt arm64-v8a binaries weren't available). I would build that wheel (C++), create the Python bindings, and then bundle it within your package. The key question from a Flet point of view is whether you can reference something like this in your requirements.txt / pyproject.toml when targeting an Android build. I would guess that you can, since you can on macOS/Windows, where I use custom wheels/binaries, but I haven't done Flet -> Android myself. To answer your question, I think the most relevant link to start this journey is this:
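To make the "bundle a custom wheel in your requirements" idea concrete, here is a minimal sketch of how a locally built wheel can be referenced from a requirements file using pip's direct-reference syntax. The file path and wheel filename below are hypothetical examples, not real artifacts; whether Flet's Android build pipeline resolves such local references is exactly the open question above.

```
# requirements.txt -- hypothetical example paths
# Reference a locally built onnxruntime wheel (PEP 508 direct reference).
# The wheel name/tags here are illustrative; yours will depend on the
# Python version and target ABI you compiled for.
onnxruntime @ file:///path/to/wheels/onnxruntime-1.17.0-cp311-cp311-linux_aarch64.whl

# Everything else can stay as normal PyPI requirements.
flet
```

A plain relative path to the `.whl` file on its own line (e.g. `./wheels/onnxruntime-....whl`) is also accepted by pip in a requirements file; which of the two forms (if either) survives Flet's packaging step for Android would need to be tested.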
-
I tried to install onnxruntime, but it's not resolved, so I wanted to be clear either way.