Learning multi-scale sparse representation for visual tracking

Zhengjian Kang, Edward K. Wong

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

    Abstract

    We present a novel algorithm for learning a multi-scale sparse representation for visual tracking. In our method, sparse codes with max pooling are used to form a multi-scale representation that integrates spatial configuration over patches of different sizes. Unlike other sparse representation methods, our method uses both holistic and local descriptors. In the hybrid framework, we formulate a new confidence measure that combines generative and discriminative confidence scores. We also devise an efficient method to update templates for adaptation to appearance changes. We demonstrate the effectiveness of our method with experiments and show that it outperforms other state-of-the-art tracking algorithms.
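    The core idea described in the abstract — computing sparse codes over patches at several scales and max-pooling them into one descriptor — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the ISTA solver, the per-scale dictionaries `D_dict`, and the non-overlapping patch grid are all assumptions made for the sake of a runnable example.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def sparse_code(x, D, lam=0.1, n_iter=50):
        """Toy L1 sparse coding via ISTA: min ||x - D a||^2 + lam * ||a||_1."""
        L = np.linalg.norm(D, 2) ** 2  # Lipschitz constant of the gradient
        a = np.zeros(D.shape[1])
        for _ in range(n_iter):
            grad = D.T @ (D @ a - x)
            z = a - grad / L
            a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
        return a

    def multiscale_feature(img, scales, D_dict):
        """Sparse-code patches at each scale, max-pool per scale, concatenate."""
        feats = []
        for s in scales:
            D = D_dict[s]  # hypothetical per-scale dictionary, shape (s*s, n_atoms)
            codes = []
            # non-overlapping s x s patches (an assumption; the paper's grid may differ)
            for i in range(0, img.shape[0] - s + 1, s):
                for j in range(0, img.shape[1] - s + 1, s):
                    p = img[i:i + s, j:j + s].ravel()
                    codes.append(np.abs(sparse_code(p, D)))
            # max pooling over all patch codes at this scale
            feats.append(np.max(codes, axis=0))
        return np.concatenate(feats)

    img = rng.random((16, 16))
    scales = [4, 8]
    D_dict = {s: rng.standard_normal((s * s, 32)) for s in scales}
    f = multiscale_feature(img, scales, D_dict)
    print(f.shape)  # one 32-dim pooled code per scale, concatenated -> (64,)
    ```

    Max pooling makes each per-scale descriptor invariant to where within the region a patch responds, while concatenating across scales preserves coarse spatial configuration.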

    Original language: English (US)
    Title of host publication: 2014 IEEE International Conference on Image Processing, ICIP 2014
    Publisher: Institute of Electrical and Electronics Engineers Inc.
    Pages: 4897-4901
    Number of pages: 5
    ISBN (Electronic): 9781479957514
    DOIs
    State: Published - Jan 28 2014

    Publication series

    Name: 2014 IEEE International Conference on Image Processing, ICIP 2014

    Keywords

    • Multi-scale sparse representation
    • max pooling
    • visual tracking

    ASJC Scopus subject areas

    • Computer Vision and Pattern Recognition
