A novel semi-supervised detection approach with weak annotation

Eric K. Tokuda, Gabriel B.A. Ferreira, Claudio Silva, Roberto M. Cesar

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

In this work we propose a semi-supervised learning approach for object detection in which detections from a preexisting detector are used to train a new detector. We differ from previous work by introducing a relative quality metric that requires simpler labeling and by proposing a full framework for the automatic generation of improved detectors. To validate the method, we collected a comprehensive dataset of more than two thousand hours of streaming video from public traffic cameras, covering variations in time, location and weather. We used these data to generate, and to assess with weak labeling, a car detector that outperforms popular detectors in hard situations such as rainy weather and low-resolution images. Experimental results are reported, corroborating the relevance of the proposed approach.
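The core idea described in the abstract is a self-training-style pipeline: a preexisting detector produces detections on unlabeled traffic-camera footage, and its confident detections are reused as pseudo-labels to train a new detector. The sketch below illustrates only this pseudo-labeling step; the function names, the Detection structure, and the 0.8 confidence threshold are illustrative assumptions and not taken from the paper, which additionally introduces a relative quality metric and weak labeling for assessment.

    # Minimal sketch of pseudo-label generation from a preexisting ("teacher")
    # detector. All names and the threshold are assumptions for illustration,
    # not the authors' implementation.
    from dataclasses import dataclass
    from typing import Callable, List, Sequence, Tuple

    @dataclass
    class Detection:
        box: Tuple[int, int, int, int]  # (x, y, w, h) in pixels
        score: float                    # detector confidence in [0, 1]
        label: str = "car"

    def generate_pseudo_labels(
        frames: Sequence,
        teacher: Callable[[object], List[Detection]],
        min_score: float = 0.8,  # assumed confidence threshold, not from the paper
    ) -> List[Tuple[object, List[Detection]]]:
        """Run the preexisting (teacher) detector on unlabeled frames and keep
        only its confident detections as pseudo-ground-truth for a new detector."""
        pseudo = []
        for frame in frames:
            kept = [d for d in teacher(frame) if d.score >= min_score]
            if kept:
                pseudo.append((frame, kept))
        return pseudo

    if __name__ == "__main__":
        # Dummy teacher so the sketch runs end to end; a real pipeline would plug
        # in an existing car detector here and then train the new detector on the
        # returned pseudo-labeled frames.
        dummy_teacher = lambda frame: [Detection(box=(0, 0, 10, 10), score=0.9)]
        data = generate_pseudo_labels([object()] * 3, dummy_teacher)
        print(f"kept pseudo-labels for {len(data)} frames")

A full pipeline would follow this step with a standard detector training loop over the pseudo-labeled frames; that part is omitted here since the paper does not tie the framework to a specific training procedure in the abstract.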

Original language: English (US)
Title of host publication: 2018 IEEE Southwest Symposium on Image Analysis and Interpretation, SSIAI 2018 - Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 129-132
Number of pages: 4
ISBN (Electronic): 9781538665688
DOIs
State: Published - Sep 21 2018
Event: 2018 IEEE Southwest Symposium on Image Analysis and Interpretation, SSIAI 2018 - Las Vegas, United States
Duration: Apr 8 2018 - Apr 10 2018

Publication series

Name: Proceedings of the IEEE Southwest Symposium on Image Analysis and Interpretation
Volume: 2018-April

Other

Other: 2018 IEEE Southwest Symposium on Image Analysis and Interpretation, SSIAI 2018
Country/Territory: United States
City: Las Vegas
Period: 4/8/18 - 4/10/18

ASJC Scopus subject areas

  • Software
  • Computer Vision and Pattern Recognition
  • Computer Science Applications
