Status: Closed
Labels: HUB (Ultralytics HUB issues) · exports (Model exports: ONNX, TensorRT, TFLite, etc.) · question (Further information is requested)
Description
Hi there,
I have trained custom models (v11n/v8s) in Ultralytics HUB, and both work flawlessly when tested in the Ultralytics HUB iOS app. However, when I use the downloaded CoreML version of the same models (nms=true) in a plain Xcode project with this YOLO dependency, using YOLOCamera(), it detects anything but the actual objects: the bounding boxes are all over the place. I tried this after running into the same issue with the model(s) in my own iOS app, both when extracting results as ModelnameOutput and when extracting them as VNRecognizedObjectObservation (roughly as in the sketch below).
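
For reference, this is a minimal sketch of the Vision-based extraction path I used in my own app. `yolo11n` stands in for the Xcode-generated class of the exported Core ML model, and the `.scaleFill` crop/scale option is an assumption on my part, so treat both as placeholders:

```swift
import CoreML
import UIKit
import Vision

// Minimal sketch of the Vision-based extraction I tried in my own app.
// `yolo11n` is a placeholder for the Xcode-generated class of the exported
// Core ML model (nms=true); swap in your own model class name.
func runDetection(on image: UIImage) throws {
    guard let cgImage = image.cgImage else { return }

    let coreMLModel = try yolo11n(configuration: MLModelConfiguration()).model
    let visionModel = try VNCoreMLModel(for: coreMLModel)

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        guard let results = request.results as? [VNRecognizedObjectObservation] else { return }
        for observation in results {
            // boundingBox is normalized [0, 1] with a lower-left origin (Vision convention).
            let rect = VNImageRectForNormalizedRect(observation.boundingBox,
                                                    Int(image.size.width),
                                                    Int(image.size.height))
            let label = observation.labels.first?.identifier ?? "unknown"
            print(label, observation.confidence, rect)
        }
    }
    // Assumed to match the scaling the model was exported with.
    request.imageCropAndScaleOption = .scaleFill

    try VNImageRequestHandler(cgImage: cgImage, options: [:]).perform([request])
}
```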
Is there a crucial postprocessing step I could have overlooked? Does the Ultralytics HUB app use a different detection-extraction approach?
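
For context, the only postprocessing I am aware of is converting Vision's normalized, lower-left-origin bounding boxes into UIKit's top-left-origin view coordinates; a minimal sketch of what I assume is needed:

```swift
import UIKit
import Vision

// Map a Vision bounding box (normalized, lower-left origin) into
// UIKit view coordinates (points, top-left origin).
func viewRect(for boundingBox: CGRect, in viewSize: CGSize) -> CGRect {
    var rect = VNImageRectForNormalizedRect(boundingBox,
                                            Int(viewSize.width),
                                            Int(viewSize.height))
    rect.origin.y = viewSize.height - rect.maxY  // flip the Y axis for UIKit
    return rect
}
```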