TY - GEN
T1 - ReSeg
T2 - 29th IEEE Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2016
AU - Visin, Francesco
AU - Romero, Adriana
AU - Cho, Kyunghyun
AU - Matteucci, Matteo
AU - Ciccone, Marco
AU - Kastner, Kyle
AU - Bengio, Yoshua
AU - Courville, Aaron
N1 - Publisher Copyright:
© 2016 IEEE.
PY - 2016/12/16
Y1 - 2016/12/16
N2 - We propose a structured prediction architecture that exploits the local generic features extracted by Convolutional Neural Networks (CNNs) and the capacity of Recurrent Neural Networks (RNNs) to retrieve distant dependencies. The proposed architecture, called ReSeg, is based on the recently introduced ReNet model for image classification. We modify and extend it to perform the more challenging task of semantic segmentation. Each ReNet layer is composed of four RNNs that sweep the image horizontally and vertically in both directions, encoding patches or activations and providing relevant global information. Moreover, ReNet layers are stacked on top of pre-trained convolutional layers, benefiting from generic local features. Upsampling layers follow the ReNet layers to recover the original image resolution in the final predictions. The proposed ReSeg architecture is efficient, flexible, and suitable for a variety of semantic segmentation tasks. We evaluate ReSeg on several widely used semantic segmentation datasets: Weizmann Horse, Oxford Flower, and CamVid, achieving state-of-the-art performance. Results show that ReSeg can act as a suitable architecture for semantic segmentation tasks, and may have further applications in other structured prediction problems. The source code and model hyperparameters are available at https://github.com/fvisin/reseg.
UR - http://www.scopus.com/inward/record.url?scp=85010223513&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85010223513&partnerID=8YFLogxK
U2 - 10.1109/CVPRW.2016.60
DO - 10.1109/CVPRW.2016.60
M3 - Conference contribution
AN - SCOPUS:85010223513
T3 - IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops
SP - 426
EP - 433
BT - Proceedings - 29th IEEE Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2016
PB - IEEE Computer Society
Y2 - 26 June 2016 through 1 July 2016
ER -