What is the problem this feature will solve?
Currently, action inference takes a whole video and produces a single action label, regardless of the video's length. I would like to do the following instead: read frames from an RTSP stream, accumulate them in a frame buffer, and run inference whenever enough frames have been collected, so that the stream yields one action label per window rather than a single label for the entire input. I hope to hear from you soon, and thanks for your help. Best wishes.
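A minimal sketch of the buffer-and-infer idea described above, assuming OpenCV (`opencv-python`) for RTSP capture; `recognizer` is a hypothetical callable mapping a list of frames to an action label, standing in for whatever inference API the project exposes. The window and stride sizes are placeholder assumptions:

```python
from collections import deque


def sliding_windows(frames, window=32, stride=16):
    """Yield fixed-size windows from an incoming frame iterator:
    the first window once `window` frames have accumulated, then
    one every `stride` frames, so each window gets its own label."""
    buf = deque(maxlen=window)
    since_last = stride  # emit as soon as the buffer first fills
    for frame in frames:
        buf.append(frame)
        since_last += 1
        if len(buf) == window and since_last >= stride:
            since_last = 0
            yield list(buf)


def run_rtsp(rtsp_url, recognizer, window=32, stride=16):
    """Read an RTSP stream and call `recognizer` once per window.
    cv2 is an assumed dependency; import is local so the windowing
    logic above stays usable without OpenCV installed."""
    import cv2
    cap = cv2.VideoCapture(rtsp_url)

    def frames():
        while True:
            ok, frame = cap.read()
            if not ok:  # stream ended or dropped
                return
            yield frame

    try:
        for clip in sliding_windows(frames(), window, stride):
            print(recognizer(clip))
    finally:
        cap.release()
```

With `window=4, stride=2`, feeding frames 0..9 into `sliding_windows` produces the overlapping clips `[0..3]`, `[2..5]`, `[4..7]`, `[6..9]` — one inference call each, instead of one call for the whole stream.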
What is the feature?
Real-time inference when a sufficient number of frames is available
What alternatives have you considered?
No response