We present a novel parts-based multi-task sparse learning method for particle-filter-based tracking. In our method, each candidate region is divided into structured local parts, which are then sparsely represented as a linear combination of atoms from dictionary templates. We treat the parts in each particle as individual tasks and jointly exploit the intrinsic relationships among tasks, both across different parts and across different particles, under a unified multi-task framework. Unlike most sparse-coding-based trackers, which use a holistic representation, we generate sparse coefficients from local parts, thereby allowing greater flexibility. Furthermore, by introducing the group-sparse ℓ1,2 norm into the linear representation problem, our tracker is able to capture outlier tasks and identify partially occluded regions. The performance of the proposed tracker is empirically compared with state-of-the-art trackers on several challenging video sequences. Both quantitative and qualitative comparisons show that our tracker is more accurate and more robust.
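To give intuition for how a group-sparse mixed norm can suppress whole groups of coefficients (and thus flag outlier tasks or occluded parts), the sketch below applies a group soft-thresholding step, the proximal operator of a sum-of-group-ℓ2-norms penalty. This is a minimal illustration under assumed conventions (rows of `C` as groups, a single threshold `lam`), not the paper's actual solver; the function name `group_soft_threshold` is hypothetical.

```python
import numpy as np

def group_soft_threshold(C, lam):
    """Proximal step for a group-sparse penalty (sum of per-group L2 norms).

    Each row of C is one group of coefficients. Rows with energy above
    lam are shrunk toward zero; rows below lam are zeroed entirely,
    which is the mechanism that prunes whole outlier groups.
    Illustrative helper only -- not the paper's exact formulation.
    """
    out = np.zeros_like(C)
    for i, row in enumerate(C):
        norm = np.linalg.norm(row)
        if norm > lam:
            out[i] = (1.0 - lam / norm) * row
    return out

# Toy example with 3 groups: the weak middle group is removed entirely,
# while the strong groups are only mildly shrunk.
C = np.array([[3.0, 4.0],   # strong group, L2 norm 5
              [0.1, 0.1],   # weak group -> zeroed out
              [0.0, 2.0]])  # moderate group, L2 norm 2
S = group_soft_threshold(C, lam=0.5)
```

Here the weak group is set exactly to zero while the others survive with slightly reduced magnitude, mirroring how a group-sparse penalty can discard unreliable parts as a whole rather than coefficient by coefficient.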