
Correctness checking for security_barrier_camera_demo w/ 1 network multi channels with inputting 1 image #3392


Open · wants to merge 53 commits into base: master

Changes from all commits (53 commits)
8f579f2
Add some debug info in security_barrier_camera_demo.
yangwang201911 Jan 24, 2022
fef9a0a
Update.
yangwang201911 Mar 7, 2022
d101b67
Update the debug msg and Add result paser for security_barrier_camera…
yangwang201911 Mar 10, 2022
7d5ff26
Add correctness checker interface for demo.
yangwang201911 Mar 10, 2022
3551ada
Implement parser and correcteness checker for security_barrier_camera…
yangwang201911 Mar 14, 2022
3fb15e0
Add correctness checker script and instantiate checker of demo securi…
yangwang201911 Mar 16, 2022
6fb10ef
. exit when task list is empty and inputs source is image instead of …
yangwang201911 Mar 22, 2022
7eca858
Exit worker thread when the inferences of all frames have been comple…
yangwang201911 Mar 23, 2022
a2f8421
Update parameters of demo security_barrier_camera_demo so that just i…
yangwang201911 Mar 23, 2022
b0845bc
Merge branch 'master' into ywang2/analysis_result_automatically_for_A…
yangwang201911 Mar 23, 2022
cc86f1d
Add the comment of replacing model for the demo security_barrier_cam…
yangwang201911 Mar 23, 2022
37f8371
Input single image for demo security_barrier_camera_demo.
yangwang201911 Mar 23, 2022
237dcf3
Update.
yangwang201911 Mar 23, 2022
d0e831b
Update.
yangwang201911 Mar 23, 2022
6b9ce5a
Update.
yangwang201911 Mar 25, 2022
85327ef
Decouple of the raw data saving from the run_tests.py.
yangwang201911 Mar 25, 2022
ab9d19c
Update.
yangwang201911 Mar 28, 2022
f1c5c87
Add scope 'correctness' to enable correctness checking.
yangwang201911 Mar 31, 2022
b4d110b
Remove the log save for each demo and update the correctness checker.
yangwang201911 Apr 1, 2022
dbe9f13
Update.
yangwang201911 Apr 1, 2022
065ca07
Update format and remove some redundant code.
yangwang201911 Apr 1, 2022
3755135
Update.
yangwang201911 Apr 1, 2022
603c120
Revert the common thread.
yangwang201911 Apr 2, 2022
2e2ade4
Update.
yangwang201911 Apr 6, 2022
3596cd2
Merge branch 'master' into ywang2/analysis_result_automatically_for_A…
yangwang201911 Apr 6, 2022
a3e464a
Update correctness checker as the common measure for all demos.
yangwang201911 Apr 7, 2022
8631f93
1. Fix the issue that demo lost the inference of the last frame when …
yangwang201911 Apr 8, 2022
91baf7d
Updata correctness checker and revert inputing images hanlder for se…
yangwang201911 Apr 11, 2022
2fe3933
1. Update correctness checker to support the multi models inputting. …
yangwang201911 Apr 12, 2022
128ec7e
Merge branch 'master' into ywang2/analysis_result_automatically_for_A…
yangwang201911 Apr 12, 2022
0914845
Update exit code when correctness checking falied.
yangwang201911 Apr 13, 2022
796e315
Modify the input dataset path when updating option '-i' for demo.
yangwang201911 Apr 15, 2022
f33983d
Update correctness checker.
yangwang201911 Apr 15, 2022
7c47ce7
Correct the output layer order of the attributes model for the securi…
yangwang201911 Apr 24, 2022
2f066d1
1. Stop reborning if images frame ID is invalid. 2. clone image frame…
yangwang201911 Apr 26, 2022
1562b9b
Update correctness checking logic.
yangwang201911 Apr 28, 2022
d2d37e3
1. fix the bug in the security demo that lost the results of the infe…
yangwang201911 May 5, 2022
65fe33b
1. Throw the exception when parsing raw data failed. 2. Correct the v…
yangwang201911 May 6, 2022
1f44622
Add logic to check if the size of vehicle attributs is correct.
yangwang201911 May 7, 2022
0ff6d8c
Update correctness checking.
yangwang201911 May 9, 2022
3958ea4
Fix the hang issue when inputting images folder.
yangwang201911 May 23, 2022
42171be
Update correctness checking logic to handle the exception.
yangwang201911 May 25, 2022
ba629d0
Update.
yangwang201911 May 25, 2022
6c43933
Fix hange issue when inputting images folder.
yangwang201911 May 26, 2022
c10b3ce
Merge branch 'master' of https://github.com/openvinotoolkit/open_mode…
yangwang201911 Jun 24, 2022
5ccace4
Fix the run_tests.py terminated with exception when timeout occurs.
yangwang201911 Jun 28, 2022
a5d84dc
Update.
yangwang201911 Aug 15, 2022
77968e6
Merge branch 'master' into ywang2/analysis_result_automatically_for_A…
yangwang201911 Aug 15, 2022
7c57fd2
update.
yangwang201911 Aug 16, 2022
f7d7a1d
Update.
yangwang201911 Sep 13, 2022
bb8e615
Merge branch 'ywang2/fix_run_tests_terminated_with_exception_when_tim…
yangwang201911 Sep 13, 2022
19f9ff1
Update.
yangwang201911 Sep 13, 2022
8739a29
Merge branch 'ywang2/fix_run_tests_terminated_with_exception_when_tim…
yangwang201911 Sep 13, 2022
6 changes: 4 additions & 2 deletions demos/common/cpp/utils/include/utils/input_wrappers.hpp
@@ -126,11 +126,13 @@ class ImageSource: public IInputSource {
return false;
} else {
subscribedInputChannels.erase(subscribedInputChannelsIt);
mat = im;
// Clone so the image buffer is not shared and mutated elsewhere.
mat = im.clone();
return true;
}
} else {
mat = im;
// Clone so the image buffer is not shared and mutated elsewhere.
mat = im.clone();
return true;
}
}
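The fix above relies on the fact that `cv::Mat` assignment copies only the header, so two `mat`s end up sharing one pixel buffer, while `clone()` makes an independent deep copy. A minimal sketch of that distinction, using a toy struct in place of `cv::Mat` so it runs without OpenCV (`ToyMat` is purely illustrative):

```cpp
#include <cassert>
#include <memory>
#include <vector>

// Toy stand-in for cv::Mat: assignment shares the pixel buffer,
// clone() makes an independent deep copy, mirroring OpenCV's semantics.
struct ToyMat {
    std::shared_ptr<std::vector<int>> data =
        std::make_shared<std::vector<int>>(4, 0);

    ToyMat clone() const {
        ToyMat m;
        m.data = std::make_shared<std::vector<int>>(*data);  // deep copy
        return m;
    }
};
```

Writing through the original after a plain assignment is visible through the copy, which is exactly the cross-channel interference the `im.clone()` change avoids.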
2 changes: 1 addition & 1 deletion demos/common/cpp/utils/src/args_helper.cpp
@@ -81,7 +81,7 @@ std::vector<std::string> parseDevices(const std::string& device_string) {
const std::string::size_type colon_position = device_string.find(":");
if (colon_position != std::string::npos) {
std::string device_type = device_string.substr(0, colon_position);
if (device_type == "HETERO" || device_type == "MULTI") {
if (device_type == "HETERO" || device_type == "MULTI" || device_type == "AUTO") {
std::string comma_separated_devices = device_string.substr(colon_position + 1);
std::vector<std::string> devices = split(comma_separated_devices, ',');
for (auto& device : devices)
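This hunk lets `parseDevices` treat `AUTO:...` like `HETERO:...` and `MULTI:...`, i.e. strip the prefix and split the remainder on commas. A self-contained sketch of that parsing logic (a hypothetical re-implementation for illustration, not the demo's actual helper, which uses its own `split`):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Sketch: "AUTO:GPU,CPU" -> {"GPU", "CPU"}; a bare "CPU" -> {"CPU"}.
std::vector<std::string> parseDeviceList(const std::string& device_string) {
    std::string devices_part = device_string;
    const auto colon = device_string.find(':');
    if (colon != std::string::npos) {
        const std::string prefix = device_string.substr(0, colon);
        // The PR adds "AUTO" to this prefix list.
        if (prefix == "HETERO" || prefix == "MULTI" || prefix == "AUTO")
            devices_part = device_string.substr(colon + 1);
    }
    std::vector<std::string> devices;
    std::string::size_type start = 0;
    while (start <= devices_part.size()) {
        auto comma = devices_part.find(',', start);
        if (comma == std::string::npos)
            comma = devices_part.size();
        devices.push_back(devices_part.substr(start, comma - start));
        start = comma + 1;
    }
    return devices;
}
```

Without the added `AUTO` branch, a string like `AUTO:GPU,CPU` would be returned as a single malformed device name instead of two devices.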
120 changes: 111 additions & 9 deletions demos/security_barrier_camera_demo/cpp/main.cpp
@@ -121,6 +121,8 @@ struct Context {
detectorsInfers.assign(detectorInferRequests);
attributesInfers.assign(attributesInferRequests);
platesInfers.assign(lprInferRequests);
totalInferFrameCounter = 0;
totalFrameCount = 0;
}

struct {
@@ -172,6 +174,11 @@ struct Context {
bool isVideo;
std::atomic<std::vector<ov::InferRequest>::size_type> freeDetectionInfersCount;
std::atomic<uint32_t> frameCounter;

// Track the count of inferred frames and the total count of input frames
std::atomic<uint32_t> totalInferFrameCounter;
std::atomic<uint32_t> totalFrameCount;

InferRequestsContainer detectorsInfers, attributesInfers, platesInfers;
PerformanceMetrics metrics;
};
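The new counters are declared `std::atomic<uint32_t>` because they are updated from several worker threads (detection callbacks, classifier callbacks, the drawer). A standalone sketch of why a plain integer would not be safe here (hypothetical helper, not demo code; assumes the compiler links the thread runtime):

```cpp
#include <atomic>
#include <cassert>
#include <cstdint>
#include <thread>
#include <vector>

// Several threads bump a shared counter concurrently. With a plain
// uint32_t, increments could be lost to data races; with std::atomic,
// every read-modify-write is indivisible and the final total is exact.
uint32_t countFramesConcurrently(unsigned threads, uint32_t perThread) {
    std::atomic<uint32_t> counter{0};
    std::vector<std::thread> pool;
    for (unsigned t = 0; t < threads; ++t)
        pool.emplace_back([&] {
            for (uint32_t i = 0; i < perThread; ++i)
                ++counter;  // atomic increment
        });
    for (auto& th : pool)
        th.join();
    return counter.load();
}
```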
@@ -220,15 +227,34 @@ class ClassifiersAggregator {
std::mutex& printMutex = static_cast<ReborningVideoFrame*>(sharedVideoFrame.get())->context.classifiersAggregatorPrintMutex;
printMutex.lock();
if (FLAGS_r && !rawDetections.empty()) {
slog::debug << "Frame #: " << sharedVideoFrame->frameId << slog::endl;
slog::debug << rawDetections;
slog::debug << "ChannelId:" << sharedVideoFrame->sourceID << "," << "FrameId:" <<sharedVideoFrame->frameId << ",";
for (auto it = rawDetections.begin(); it != rawDetections.end(); ++it) {
if(it == std::prev(rawDetections.end()))
slog::debug << *it << "\t";
else
slog::debug << *it << ",";
}
// destructor assures that none uses the container
for (const std::string& rawAttribute : rawAttributes.container) {
slog::debug << rawAttribute << slog::endl;
// Format: ChannelId,FrameId,ObjectId,ObjectLabel,Prob,roi_x,roi_y,roi_width,roi_height,[Vehicle Attributes],[License Plate]
for (auto it = rawAttributes.container.begin(); it != rawAttributes.container.end(); ++it) {
auto pos = it->find(":");
if(pos != std::string::npos) {
if(it == std::prev(rawAttributes.container.end()))
slog::debug << it->substr(pos + 1) << "\t";
else
slog::debug << it->substr(pos + 1) << ",";
}
}
for (const std::string& rawDecodedPlate : rawDecodedPlates.container) {
slog::debug << rawDecodedPlate << slog::endl;
for (auto it = rawDecodedPlates.container.begin(); it != rawDecodedPlates.container.end(); ++it) {
auto pos = it->find(":");
if(pos != std::string::npos) {
if(it == std::prev(rawDecodedPlates.container.end()))
slog::debug << it->substr(pos + 1);
else
slog::debug << it->substr(pos + 1) << ",";
}
}
slog::debug << slog::endl;
}
printMutex.unlock();
tryPush(static_cast<ReborningVideoFrame*>(sharedVideoFrame.get())->context.resAggregatorsWorker,
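The rewritten `-r` output emits one machine-parsable line per frame, starting with `ChannelId:<c>,FrameId:<f>,` so the correctness checker can key results by channel and frame. A sketch of how a checker might extract those ids (illustrative only; the checker added by this PR is a Python script, and `parseIds` is a hypothetical name):

```cpp
#include <cassert>
#include <string>
#include <utility>

// Pull (channel, frame) out of a line in the new
// "ChannelId:<c>,FrameId:<f>,..." raw-output format.
// Assumes both keys are present; std::stoi stops at the first
// non-digit character, so the trailing comma terminates each number.
std::pair<int, int> parseIds(const std::string& line) {
    const auto c = line.find("ChannelId:");
    const auto f = line.find("FrameId:");
    const int channel = std::stoi(line.substr(c + 10));  // skip "ChannelId:"
    const int frame = std::stoi(line.substr(f + 8));     // skip "FrameId:"
    return {channel, frame};
}
```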
@@ -292,6 +318,9 @@ ReborningVideoFrame::~ReborningVideoFrame() {
context.videoFramesContext.lastFrameIdsMutexes[sourceID].lock();
const auto frameId = ++context.videoFramesContext.lastframeIds[sourceID];
context.videoFramesContext.lastFrameIdsMutexes[sourceID].unlock();
// Stop reborning if image frameId is invalid
if(!context.isVideo && frameId >= FLAGS_n_iqs)
return;
std::shared_ptr<ReborningVideoFrame> reborn = std::make_shared<ReborningVideoFrame>(context, sourceID, frameId, frame);
worker->push(std::make_shared<Reader>(reborn));
} catch (const std::bad_weak_ptr&) {}
@@ -305,6 +334,15 @@ bool Drawer::isReady() {
if (std::chrono::steady_clock::now() - prevShow > showPeriod) {
return true;
} else {
if (!context.isVideo) {
uint32_t totalInferFrameCounter = FLAGS_ni == 0 ? FLAGS_n_iqs * context.totalFrameCount : FLAGS_ni * FLAGS_n_iqs;
if (context.totalInferFrameCounter == totalInferFrameCounter) {
try {
std::shared_ptr<Worker>(context.drawersContext.drawersWorker)->stop();
}
catch (const std::bad_weak_ptr&) {}
}
}
return false;
}
} else {
@@ -314,6 +352,15 @@
if (2 > gridMats.size()) { // buffer size
return true;
} else {
if (!context.isVideo) {
uint32_t totalInferFrameCounter = FLAGS_ni == 0 ? FLAGS_n_iqs * context.totalFrameCount : FLAGS_ni * FLAGS_n_iqs;
if (context.totalInferFrameCounter == totalInferFrameCounter) {
try {
std::shared_ptr<Worker>(context.drawersContext.drawersWorker)->stop();
}
catch (const std::bad_weak_ptr&) {}
}
}
return false;
}
} else {
@@ -322,6 +369,15 @@
&& std::chrono::steady_clock::now() - prevShow > showPeriod) {
return true;
} else {
if (!context.isVideo) {
uint32_t totalInferFrameCounter = FLAGS_ni == 0 ? FLAGS_n_iqs * context.totalFrameCount : FLAGS_ni * FLAGS_n_iqs;
if (context.totalInferFrameCounter == totalInferFrameCounter) {
try {
std::shared_ptr<Worker>(context.drawersContext.drawersWorker)->stop();
}
catch (const std::bad_weak_ptr&) {}
}
}
return false;
}
} else {
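Each of the stop checks in `Drawer::isReady` recomputes the same expected total: with `-ni` unset (0), every one of the `totalFrameCount` inputs is queued `n_iqs` times; with `-ni` set, the channel count itself is `ni`, each with `n_iqs` queued copies. As read from the code, that expression could be factored into one helper (a hypothetical refactor, not part of the PR), which would also remove the four duplicated blocks:

```cpp
#include <cassert>
#include <cstdint>

// Expected number of inferred frames for image inputs, matching the
// ternary repeated at every stop check in Drawer::isReady/process:
//   FLAGS_ni == 0 ? FLAGS_n_iqs * totalFrameCount : FLAGS_ni * FLAGS_n_iqs
uint32_t expectedInferCount(uint32_t ni, uint32_t n_iqs,
                            uint32_t totalFrameCount) {
    return ni == 0 ? n_iqs * totalFrameCount : ni * n_iqs;
}
```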
@@ -378,6 +434,13 @@ void Drawer::process() {
}
} else {
if (!context.isVideo) {
// Calculate the expected inference count for the input images.
uint32_t totalInferFrameCounter = FLAGS_ni == 0 ? FLAGS_n_iqs * context.totalFrameCount : FLAGS_ni * FLAGS_n_iqs;
if (context.totalInferFrameCounter < totalInferFrameCounter)
{
context.drawersContext.drawerMutex.unlock();
return;
}
try {
std::shared_ptr<Worker>(context.drawersContext.drawersWorker)->stop();
}
Expand All @@ -388,6 +451,15 @@ void Drawer::process() {
gridMats.emplace((--gridMats.end())->first + 1, firstGridIt->second);
gridMats.erase(firstGridIt);
}
if (!context.isVideo) {
uint32_t totalInferFrameCounter = FLAGS_ni == 0 ? FLAGS_n_iqs * context.totalFrameCount : FLAGS_ni * FLAGS_n_iqs;
if (context.totalInferFrameCounter == totalInferFrameCounter) {
try {
std::shared_ptr<Worker>(context.drawersContext.drawersWorker)->stop();
}
catch (const std::bad_weak_ptr&) {}
}
}
context.drawersContext.drawerMutex.unlock();
}

@@ -429,6 +501,7 @@ bool DetectionsProcessor::isReady() {
classifiersAggregator = std::make_shared<ClassifiersAggregator>(sharedVideoFrame);
std::list<Detector::Result> results;
results = context.inferTasksContext.detector.getResults(*inferRequest, sharedVideoFrame->frame.size(), classifiersAggregator->rawDetections);

for (Detector::Result result : results) {
switch (result.label) {
case 1:
@@ -489,6 +562,8 @@ void DetectionsProcessor::process() {
const cv::Rect vehicleRect = *vehicleRectsIt;
ov::InferRequest& attributesRequest = *attributesRequestIt;
context.detectionsProcessorsContext.vehicleAttributesClassifier.setImage(attributesRequest, sharedVideoFrame->frame, vehicleRect);
// Decrease the total inferred frame count by 1 when a ROI of the frame has attributes to classify.
context.totalInferFrameCounter--;

attributesRequest.set_callback(
std::bind(
@@ -508,6 +583,8 @@
classifiersAggregator->push(
BboxAndDescr{BboxAndDescr::ObjectType::VEHICLE, rect, attributes.first + ' ' + attributes.second});
context.attributesInfers.inferRequests.lockedPushBack(attributesRequest);
// Increase the total inferred frame count by 1 when attributes classification is done.
context.totalInferFrameCounter++;
}, classifiersAggregator,
std::ref(attributesRequest),
vehicleRect,
@@ -528,7 +605,8 @@
const cv::Rect plateRect = *plateRectsIt;
ov::InferRequest& lprRequest = *lprRequestsIt;
context.detectionsProcessorsContext.lpr.setImage(lprRequest, sharedVideoFrame->frame, plateRect);

// Decrease the total inferred frame count by 1 when a ROI of the frame has a license plate.
context.totalInferFrameCounter--;
lprRequest.set_callback(
std::bind(
[](std::shared_ptr<ClassifiersAggregator> classifiersAggregator,
@@ -544,6 +622,8 @@
}
classifiersAggregator->push(BboxAndDescr{BboxAndDescr::ObjectType::PLATE, rect, std::move(result)});
context.platesInfers.inferRequests.lockedPushBack(lprRequest);
// Increase the total inferred frame count by 1 when license plate recognition is done.
context.totalInferFrameCounter++;
}, classifiersAggregator,
std::ref(lprRequest),
plateRect,
@@ -562,6 +642,8 @@
tryPush(context.detectionsProcessorsContext.detectionsProcessorsWorker,
std::make_shared<DetectionsProcessor>(sharedVideoFrame, std::move(classifiersAggregator), std::move(vehicleRects), std::move(plateRects)));
}
// Count the frames that have passed inference.
context.totalInferFrameCounter++;
}
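The counter discipline in `DetectionsProcessor::process` is a balance scheme: the frame-level `++` after detection is offset by a `--` for every ROI whose classification is still in flight, and each completion callback restores one. Once every callback has fired, the counter equals the number of fully classified frames. A toy single-threaded sketch of that bookkeeping (all names hypothetical; in the demo the callbacks are OpenVINO infer-request completion callbacks):

```cpp
#include <atomic>
#include <cassert>
#include <cstdint>
#include <functional>
#include <vector>

struct FrameAccounting {
    std::atomic<int32_t> counter{0};
    std::vector<std::function<void()>> pendingCallbacks;

    void onFrameDetected(unsigned roisToClassify) {
        for (unsigned i = 0; i < roisToClassify; ++i) {
            --counter;  // classification scheduled, frame not done yet
            pendingCallbacks.push_back([this] { ++counter; });
        }
        ++counter;      // the frame itself passed detection
    }

    void fireAllCallbacks() {
        for (auto& cb : pendingCallbacks)
            cb();
        pendingCallbacks.clear();
    }
};
```

While classifications are pending the balance stays below the frame count, so the stop checks in `Drawer` correctly refuse to shut the worker down early.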

bool InferTask::isReady() {
@@ -584,10 +666,10 @@ void InferTask::process() {
InferRequestsContainer& detectorsInfers = context.detectorsInfers;
std::reference_wrapper<ov::InferRequest> inferRequest = detectorsInfers.inferRequests.container.back();
detectorsInfers.inferRequests.container.pop_back();

detectorsInfers.inferRequests.mutex.unlock();

context.inferTasksContext.detector.setImage(inferRequest, sharedVideoFrame->frame);

inferRequest.get().set_callback(
std::bind(
[](VideoFrame::Ptr sharedVideoFrame,
@@ -628,6 +710,19 @@ void Reader::process() {
context.readersContext.lastCapturedFrameIds[sourceID]++;
context.readersContext.lastCapturedFrameIdsMutexes[sourceID].unlock();
try {
// Calculate the expected inference count for the input video.
uint32_t totalInferFrameCounter = 0;
if (FLAGS_ni == 0)
totalInferFrameCounter = context.totalFrameCount;
else
totalInferFrameCounter = FLAGS_ni * context.totalFrameCount;

if (context.totalInferFrameCounter < totalInferFrameCounter)
{
// Reborn this invalid frame so the worker can stop on a later pass.
std::shared_ptr<Worker>(context.drawersContext.drawersWorker)->push(std::make_shared<Reader>(sharedVideoFrame));
return;
}
std::shared_ptr<Worker>(context.drawersContext.drawersWorker)->stop();
} catch (const std::bad_weak_ptr&) {}
}
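When the reader runs out of frames but inference is still in flight, it re-pushes ("reborns") the exhausted frame so the worker wakes up again and re-checks; only once the completed count reaches its target does it call `stop()`. A toy single-threaded sketch of that requeue-until-done idea (all names hypothetical; here the task itself advances the counter, whereas in the demo other threads do):

```cpp
#include <cassert>
#include <cstdint>
#include <deque>
#include <functional>

// Re-queue a task until `completed` reaches `target`, then let the
// queue drain. Returns how many times the task had to be reborn.
uint32_t drainUntilComplete(uint32_t target, uint32_t& completed) {
    std::deque<std::function<void()>> queue;
    uint32_t requeues = 0;
    queue.push_back([&] { ++completed; });
    while (!queue.empty()) {
        auto task = queue.front();
        queue.pop_front();
        task();
        if (completed < target) {
            ++requeues;
            queue.push_back([&] { ++completed; });  // "reborn" the task
        }
    }
    return requeues;
}
```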
@@ -667,6 +762,8 @@ int main(int argc, char* argv[]) {
videoCapturSourcess.push_back(std::make_shared<VideoCaptureSource>(videoCapture, FLAGS_loop_video));
}
}

uint32_t totalFrameCount = 0;
for (const std::string& file : files) {
cv::Mat frame = cv::imread(file, cv::IMREAD_COLOR);
if (frame.empty()) {
Expand All @@ -676,8 +773,12 @@ int main(int argc, char* argv[]) {
return 1;
}
videoCapturSourcess.push_back(std::make_shared<VideoCaptureSource>(videoCapture, FLAGS_loop_video));
// Get the total frame count from this video
totalFrameCount = static_cast<uint32_t>(videoCapture.get(cv::CAP_PROP_FRAME_COUNT));
} else {
imageSourcess.push_back(std::make_shared<ImageSource>(frame, true));
// Count the total frames from the input images
totalFrameCount++;
}
}
uint32_t channelsNum = 0 == FLAGS_ni ? videoCapturSourcess.size() + imageSourcess.size() : FLAGS_ni;
@@ -721,7 +822,6 @@
}
core.set_property("CPU", ov::affinity(ov::Affinity::NONE));
core.set_property("CPU", ov::streams::num((device_nstreams.count("CPU") > 0 ? ov::streams::Num(device_nstreams["CPU"]) : ov::streams::AUTO)));

device_nstreams["CPU"] = core.get_property("CPU", ov::streams::num);
}

@@ -795,6 +895,8 @@
nireq,
isVideo,
nclassifiersireq, nrecognizersireq};
// Initialize the total input frame count.
context.totalFrameCount = totalFrameCount;
// Create a worker after a context because the context has only weak_ptr<Worker>, but the worker is going to
// indirectly store ReborningVideoFrames which have a reference to the context. So there won't be a situation
// when the context is destroyed and the worker still lives with its ReborningVideoFrames referring to the