TY - GEN
T1 - Parts-based multi-task sparse learning for visual tracking
AU - Kang, Zhengjian
AU - Wong, Edward K.
N1 - Publisher Copyright:
© 2015 IEEE.
PY - 2015/12/9
Y1 - 2015/12/9
N2 - We present a novel parts-based multi-task sparse learning method for particle-filter-based tracking. In our method, candidate regions are divided into structured local parts, which are then sparsely represented by a linear combination of atoms from dictionary templates. We treat the parts in each particle as individual tasks and jointly incorporate the intrinsic relationships among tasks, both across different parts and across different particles, within a unified multi-task framework. Unlike most sparse-coding-based trackers, which use a holistic representation, we generate sparse coefficients from local parts, thereby allowing greater flexibility. Furthermore, by introducing a group-sparse ℓ1,2 norm into the linear representation problem, our tracker is able to capture outlier tasks and identify partially occluded regions. The performance of the proposed tracker is empirically compared with that of state-of-the-art trackers on several challenging video sequences. Both quantitative and qualitative comparisons show that our tracker is superior and more robust.
AB - We present a novel parts-based multi-task sparse learning method for particle-filter-based tracking. In our method, candidate regions are divided into structured local parts, which are then sparsely represented by a linear combination of atoms from dictionary templates. We treat the parts in each particle as individual tasks and jointly incorporate the intrinsic relationships among tasks, both across different parts and across different particles, within a unified multi-task framework. Unlike most sparse-coding-based trackers, which use a holistic representation, we generate sparse coefficients from local parts, thereby allowing greater flexibility. Furthermore, by introducing a group-sparse ℓ1,2 norm into the linear representation problem, our tracker is able to capture outlier tasks and identify partially occluded regions. The performance of the proposed tracker is empirically compared with that of state-of-the-art trackers on several challenging video sequences. Both quantitative and qualitative comparisons show that our tracker is superior and more robust.
KW - Multi-task learning
KW - particle filter
KW - parts-based model
KW - sparse representation
KW - visual tracking
UR - http://www.scopus.com/inward/record.url?scp=84956633353&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84956633353&partnerID=8YFLogxK
U2 - 10.1109/ICIP.2015.7351561
DO - 10.1109/ICIP.2015.7351561
M3 - Conference contribution
AN - SCOPUS:84956633353
T3 - Proceedings - International Conference on Image Processing, ICIP
SP - 4022
EP - 4026
BT - 2015 IEEE International Conference on Image Processing, ICIP 2015 - Proceedings
PB - IEEE Computer Society
T2 - IEEE International Conference on Image Processing, ICIP 2015
Y2 - 27 September 2015 through 30 September 2015
ER -
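
Note: the abstract describes sparse coding of local parts over dictionary templates with a group-sparse (ℓ1,2) coupling across tasks. The following is a minimal illustrative sketch of that generic idea only, not the authors' implementation; it treats each column of Y as one task (a local part), uses a plain proximal-gradient (ISTA) solver, and all function names, parameters, and values are assumptions made for illustration.

# Illustrative sketch: group-sparse (l1,2) multi-task sparse coding.
# D: dictionary of templates (one atom per column); Y: observations, one task per column.
# Objective: 0.5 * ||Y - D @ A||_F^2 + lam * sum_j ||A[j, :]||_2, which encourages
# the tasks to select the same dictionary atoms (row-wise group sparsity).
import numpy as np

def group_soft_threshold(A, tau):
    """Row-wise proximal operator of tau * sum_j ||A[j, :]||_2."""
    row_norms = np.linalg.norm(A, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(row_norms, 1e-12), 0.0)
    return scale * A

def multitask_sparse_code(D, Y, lam=0.1, n_iter=200):
    """Proximal gradient (ISTA) for the group-sparse coding objective above."""
    n_atoms, n_tasks = D.shape[1], Y.shape[1]
    A = np.zeros((n_atoms, n_tasks))
    step = 1.0 / np.linalg.norm(D, 2) ** 2      # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = D.T @ (D @ A - Y)                # gradient of the quadratic data term
        A = group_soft_threshold(A - step * grad, step * lam)
    return A

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    D = rng.standard_normal((64, 10))           # 10 hypothetical dictionary templates
    D /= np.linalg.norm(D, axis=0)              # unit-norm atoms
    Y = rng.standard_normal((64, 6))            # 6 hypothetical tasks (local parts)
    A = multitask_sparse_code(D, Y, lam=0.5)
    print("atoms shared across tasks:", np.flatnonzero(np.linalg.norm(A, axis=1) > 1e-6))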