TY - GEN
T1 - Learning spatiotemporal features with 3D convolutional networks
AU - Tran, Du
AU - Bourdev, Lubomir
AU - Fergus, Rob
AU - Torresani, Lorenzo
AU - Paluri, Manohar
N1 - Publisher Copyright:
© 2015 IEEE.
PY - 2015/2/17
Y1 - 2015/2/17
N2 - We propose a simple yet effective approach for spatiotemporal feature learning using deep 3-dimensional convolutional networks (3D ConvNets) trained on a large-scale supervised video dataset. Our findings are three-fold: 1) 3D ConvNets are more suitable for spatiotemporal feature learning than 2D ConvNets; 2) a homogeneous architecture with small 3 × 3 × 3 convolution kernels in all layers is among the best-performing architectures for 3D ConvNets; and 3) our learned features, namely C3D (Convolutional 3D), with a simple linear classifier outperform state-of-the-art methods on 4 different benchmarks and are comparable with the current best methods on the other 2 benchmarks. In addition, the features are compact, achieving 52.8% accuracy on the UCF101 dataset with only 10 dimensions, and very efficient to compute thanks to the fast inference of ConvNets. Finally, they are conceptually simple and easy to train and use.
AB - We propose a simple yet effective approach for spatiotemporal feature learning using deep 3-dimensional convolutional networks (3D ConvNets) trained on a large-scale supervised video dataset. Our findings are three-fold: 1) 3D ConvNets are more suitable for spatiotemporal feature learning than 2D ConvNets; 2) a homogeneous architecture with small 3 × 3 × 3 convolution kernels in all layers is among the best-performing architectures for 3D ConvNets; and 3) our learned features, namely C3D (Convolutional 3D), with a simple linear classifier outperform state-of-the-art methods on 4 different benchmarks and are comparable with the current best methods on the other 2 benchmarks. In addition, the features are compact, achieving 52.8% accuracy on the UCF101 dataset with only 10 dimensions, and very efficient to compute thanks to the fast inference of ConvNets. Finally, they are conceptually simple and easy to train and use.
UR - http://www.scopus.com/inward/record.url?scp=84973865953&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84973865953&partnerID=8YFLogxK
U2 - 10.1109/ICCV.2015.510
DO - 10.1109/ICCV.2015.510
M3 - Conference contribution
AN - SCOPUS:84973865953
T3 - Proceedings of the IEEE International Conference on Computer Vision
SP - 4489
EP - 4497
BT - 2015 IEEE International Conference on Computer Vision, ICCV 2015
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 15th IEEE International Conference on Computer Vision, ICCV 2015
Y2 - 11 December 2015 through 18 December 2015
ER -
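
The abstract above describes the core C3D design: a homogeneous 3D ConvNet using 3 × 3 × 3 convolution kernels in every layer, whose fully connected activations serve as video features for a simple linear classifier. The following is a minimal PyTorch sketch of that idea, not the authors' released model; the layer count, channel widths, pooling sizes, and feature dimension here are illustrative assumptions, while the 16-frame 112 × 112 clip input follows the paper's setting.

```python
import torch
import torch.nn as nn


class C3DSketch(nn.Module):
    """Toy C3D-style network: homogeneous 3x3x3 convolutions throughout."""

    def __init__(self, num_classes: int = 101):
        super().__init__()
        self.features = nn.Sequential(
            # All convolutions use 3x3x3 kernels with padding 1, per the paper's
            # homogeneous-architecture finding; channel widths are assumptions.
            nn.Conv3d(3, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),   # pool only spatially at first
            nn.Conv3d(64, 128, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=2),
            nn.Conv3d(128, 256, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=2),
        )
        # Fully connected layer whose activation plays the role of the "C3D feature".
        # The flattened size (256 * 4 * 14 * 14) assumes a 3x16x112x112 input clip.
        self.fc = nn.Sequential(
            nn.Flatten(),
            nn.Linear(256 * 4 * 14 * 14, 4096), nn.ReLU(inplace=True),
        )
        # A simple linear classifier on top of the learned feature.
        self.classifier = nn.Linear(4096, num_classes)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, channels=3, frames=16, height=112, width=112)
        feat = self.fc(self.features(clip))
        return self.classifier(feat)


# Usage example: classify one random 16-frame RGB clip.
logits = C3DSketch()(torch.randn(1, 3, 16, 112, 112))
print(logits.shape)  # torch.Size([1, 101])
```

In practice the paper extracts the fully connected activations (here `self.fc`) as fixed features and trains only the linear classifier on each benchmark, which is what makes the representation cheap to reuse.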