Here `yolov8n.pt` is the trained weights file, the output format is specified as `tflite`, and `int8=True` means the model will be quantized to signed 8-bit for both the weights and the activations. (#33)
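For reference, a minimal sketch of that export using the standard Ultralytics Python API (the CLI equivalent is shown in the comment; paths and weight names are just the ones quoted above):

```python
from ultralytics import YOLO

# Load the trained weights.
model = YOLO("yolov8n.pt")

# Export to TFLite with signed 8-bit quantization of weights and activations.
# CLI equivalent:
#   yolo export model=yolov8n.pt format=tflite int8=True
model.export(format="tflite", int8=True)
```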
If the exported model is already an int8 TFLite file, why would we then run the quantization script you provided on it? Shouldn't the model be exported as a float32 TFLite model first, and then quantized by running your quantization script?
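To make the alternative workflow in the question concrete, here is a sketch of post-training int8 quantization applied to a float32 export, using TensorFlow's standard converter rather than the repo's own script. The SavedModel path, output filename, input shape, and the `representative_data` calibration generator are all illustrative assumptions, not taken from this project:

```python
import numpy as np
import tensorflow as tf

# Hypothetical calibration generator: yields a few representative inputs
# matching the model's assumed input shape (1, 640, 640, 3), float32 in [0, 1].
def representative_data():
    for _ in range(100):
        yield [np.random.rand(1, 640, 640, 3).astype(np.float32)]

# Load the float32 SavedModel produced by a float export (path is illustrative).
converter = tf.lite.TFLiteConverter.from_saved_model("yolov8n_saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
# Restrict to full-integer (int8) ops so weights and activations are quantized.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
tflite_int8 = converter.convert()

with open("yolov8n_int8.tflite", "wb") as f:
    f.write(tflite_int8)
```

With `int8=True`, the Ultralytics exporter performs this kind of calibration internally, so the resulting `.tflite` should not need a second quantization pass.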