MoDeep: A deep learning framework using motion features for human pose estimation

Arjun Jain, Jonathan Tompson, Yann LeCun, Christoph Bregler

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

In this work, we propose a novel and efficient method for articulated human pose estimation in videos using a convolutional network architecture, which incorporates both color and motion features. We propose a new human body pose dataset, FLIC-motion, that extends the FLIC dataset [1] with additional motion features (the dataset can be downloaded from http://cs.nyu.edu/~ajain/accv2014/). We apply our architecture to this dataset and report significantly better performance than current state-of-the-art pose detection systems.
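As a rough illustration of the input representation the abstract describes (a convolutional network fed both color and motion features), the sketch below stacks an RGB frame with a simple motion cue into one multi-channel array. The per-pixel temporal difference used here is an assumption for illustration only; the paper evaluates its own motion representations, and the function name is hypothetical.

```python
import numpy as np

def build_input(frame_t, frame_prev):
    """Stack an RGB frame with a simple motion feature into one
    multi-channel input array (channels last).

    frame_t, frame_prev: uint8 arrays of shape (H, W, 3).
    Returns a float32 array of shape (H, W, 4): three color channels
    plus one motion channel (grayscale temporal difference).
    """
    rgb = frame_t.astype(np.float32) / 255.0
    # Simple motion cue: absolute difference of grayscale intensities
    # between consecutive frames. This is a stand-in for the paper's
    # motion features, not the representation used by the authors.
    gray_t = frame_t.astype(np.float32).mean(axis=2) / 255.0
    gray_p = frame_prev.astype(np.float32).mean(axis=2) / 255.0
    motion = np.abs(gray_t - gray_p)[..., None]
    return np.concatenate([rgb, motion], axis=2)

# Example: two dummy 8x8 frames (all-black followed by all-white).
prev = np.zeros((8, 8, 3), dtype=np.uint8)
curr = np.full((8, 8, 3), 255, dtype=np.uint8)
x = build_input(curr, prev)
print(x.shape)  # (8, 8, 4)
```

The resulting (H, W, 4) array could then be fed to a convolutional network whose first layer accepts four input channels instead of the usual three.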

Original language: English (US)
Title of host publication: Computer Vision - ACCV 2014 - 12th Asian Conference on Computer Vision, Revised Selected Papers
Editors: Ming-Hsuan Yang, Hideo Saito, Daniel Cremers, Ian Reid
Publisher: Springer Verlag
Pages: 302-315
Number of pages: 14
ISBN (Print): 9783319168074
DOIs
State: Published - 2015
Event: 12th Asian Conference on Computer Vision, ACCV 2014 - Singapore, Singapore
Duration: Nov 1, 2014 - Nov 5, 2014

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 9004
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Other

Other: 12th Asian Conference on Computer Vision, ACCV 2014
Country: Singapore
City: Singapore
Period: 11/1/14 - 11/5/14

ASJC Scopus subject areas

  • Theoretical Computer Science
  • Computer Science (all)

