Replies: 14 comments
-
Hi @Duckypu First, I'd recommend making sure that you are using the correct firmware version together with the matching SDK version; we only test for that combination. Do you also have this issue when you try the model provided in the example? For debugging, first upload your model to the camera and run it on the device.
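As a quick extra check before uploading, you can also verify on a host machine that the .tflite file itself parses and runs with the TensorFlow Lite interpreter. A minimal sketch (the file name my_custom.tflite is only taken from later in this thread; the dummy input is read from the model rather than assumed):

```python
# Host-side sanity check: does the .tflite file load and run at all?
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="my_custom.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Build a dummy input with the dtype/shape the model declares
# (often uint8 for fully quantized SSD models).
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()

for out in output_details:
    print(out["name"], out["shape"], out["dtype"])
```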
-
Hi @Corallo I ran the command you mentioned
I got:
And then
I got:
In addition, I also tried comparing "ssd_mobilenet_v2_coco_quant_postprocess.tflite" and "my_custom.tflite" in Netron. For ssd_mobilenet_v2_coco_quant_postprocess.tflite: Lastly, I'm happy to provide my unweighted model to you privately if you want.
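Besides the Netron comparison, the input/output tensor properties of both files can also be dumped programmatically so any mismatch is easy to spot. A minimal sketch, assuming both .tflite files are in the working directory:

```python
# Dump input/output dtypes, shapes and quantization parameters of both models,
# so differences (e.g. float32 vs uint8 tensors) stand out.
import tensorflow as tf

def describe(path):
    interpreter = tf.lite.Interpreter(model_path=path)
    interpreter.allocate_tensors()
    print(f"== {path} ==")
    for kind, details in (("input", interpreter.get_input_details()),
                          ("output", interpreter.get_output_details())):
        for d in details:
            print(kind, d["name"], d["shape"], d["dtype"], d["quantization"])

describe("ssd_mobilenet_v2_coco_quant_postprocess.tflite")
describe("my_custom.tflite")
```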
-
Thanks for the detailed report. This looks like a bug on our side. If you can't share your model publicly, the best option is to open a ticket here:
-
Thank you, I've already opened a ticket (#02150437). If there is any problem with the attached models, please let me know.
-
Hi @Duckypu Could you try to run the larod-client command again, like this: You can find the system log in the GUI under System -> Logs -> View the system log.
-
Hi @Corallo I'm glad to hear from you. I ran the command you mentioned
I got:
and then checked the GUI under System -> Logs -> View the system log. I got:
In my opinion, the issue is not related to memory; after all, on OS 10 the model could be loaded successfully. Or is it the case that a default model is already running on the device since OS 11?
-
I have been trying your model on a P3265-LVE with 11.5 and 11.6, and it works fine for me. The only thing I can reproduce is that warning message about OP32, even though it doesn't result in a crash for me.
-
Hi @Corallo After digging deeper into this, I found a mistake on my end: my camera is actually a P3265-LV, not a P3265-LVE. Now I've managed to load the model successfully with firmware 11.5.64 and SDK 1.9, but only randomly. However, even after making sure I have the correct firmware version, I'm still facing problems loading the model on other versions:
Additionally, I'd be happy to share the conversion method with you privately. Can I send it to you through a private channel?
-
Hi @Duckypu Yes, I actually tested on a P3265-LV too, but the two devices should be equivalent. What do you mean by "randomly"? Is it not consistent/reproducible?
-
Hi @Corallo
-
@Duckypu
-
Hi @Corallo, I am facing the same issue when I try to load my custom model as well.
-
@ThenoobMario Please open a new discussion or issue and provide some more context :)
-
Moving this issue into discussions, as for now it doesn't seem to be a bug.
-
Description
Thank you for your attention. I've trained a custom ssdlite_mobiledet model using the TensorFlow API. Following previous work, I made changes to the Dockerfile.model and env.aarch64.artpec8 paths, and I was able to run it successfully in the following environment:
However, when I upgraded the Axis firmware to version 11, I encountered the following issue during inference:
Issue environment
(also tried firmware 11.2.68 with SDK 1.6, 11.4.63 with SDK 1.8, and 11.5.64 with SDK 1.10)
Please help me, thanks in advance.
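For reference, a typical post-training integer quantization export for this kind of model looks roughly like the sketch below. The paths, the 320x320 input size, and the representative_images() generator are illustrative placeholders following the standard TensorFlow Lite converter recipe, not necessarily the exact conversion pipeline used here:

```python
# Minimal post-training full-integer quantization sketch for a detector
# exported as a SavedModel. Paths, input size and representative_images()
# are placeholders; SSD post-processing ops may need extra handling
# depending on how the model was exported.
import tensorflow as tf

def representative_images():
    # Yield a few hundred preprocessed training images, one batch per call.
    for _ in range(100):
        yield [tf.random.uniform([1, 320, 320, 3], dtype=tf.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("exported_model/saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_images
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

with open("my_custom.tflite", "wb") as f:
    f.write(converter.convert())
```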