Long-term Recurrent Convolutional Networks for Visual Recognition and Description

Jeffrey Donahue, Lisa Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko and Trevor Darrell

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2014-180

November 17, 2014

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2014/EECS-2014-180.pdf

Models based on deep convolutional networks have dominated recent image interpretation tasks; we investigate whether models which are also recurrent, or "temporally deep", are effective for tasks involving sequences, visual and otherwise. We develop a novel recurrent convolutional architecture suitable for large-scale visual learning which is end-to-end trainable, and demonstrate the value of these models on benchmark video recognition tasks, image description and retrieval problems, and video narration challenges. In contrast to current models which assume a fixed spatio-temporal receptive field or simple temporal averaging for sequential processing, recurrent convolutional models are "doubly deep" in that they can be compositional in spatial and temporal "layers". Such models may have advantages when target concepts are complex and/or training data are limited. Learning long-term dependencies is possible when nonlinearities are incorporated into the network state updates. Long-term RNN models are appealing in that they can directly map variable-length inputs (e.g., video frames) to variable-length outputs (e.g., natural language text) and can model complex temporal dynamics, yet they can be optimized with backpropagation. Our recurrent long-term models are directly connected to modern visual convnet models and can be jointly trained to simultaneously learn temporal dynamics and convolutional perceptual representations. Our results show that such models have distinct advantages over state-of-the-art models for recognition or generation which are separately defined and/or optimized.
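
To make the "doubly deep" idea concrete, the following is a minimal sketch in PyTorch of a recurrent convolutional model of this kind: a convnet encodes each frame, an LSTM integrates the per-frame features over time, and both parts are trained jointly. This is an illustrative re-implementation under stated assumptions, not the report's code; the class name LRCNSketch, the tiny stand-in convnet, and all layer sizes are hypothetical choices made for brevity.

# A minimal sketch of a recurrent convolutional model in the spirit of the
# report: spatial "layers" (a per-frame convnet) composed with temporal
# "layers" (an LSTM), trained jointly end to end. All names and sizes below
# are illustrative assumptions, not the report's configuration.
import torch
import torch.nn as nn

class LRCNSketch(nn.Module):
    def __init__(self, num_classes, feat_dim=256, hidden_dim=256):
        super().__init__()
        # Spatial component: a small convnet applied independently to each
        # frame (a stand-in for a full modern visual convnet).
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        # Temporal component: an LSTM over the sequence of per-frame features,
        # whose nonlinear state updates let it learn long-term dependencies.
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, frames):
        # frames: (batch, time, 3, H, W); the time dimension may vary
        # between clips, since the LSTM handles variable-length input.
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        outputs, _ = self.lstm(feats)
        # One prediction per time step; averaging them yields a clip-level
        # label for a recognition task.
        return self.classifier(outputs).mean(dim=1)

if __name__ == "__main__":
    model = LRCNSketch(num_classes=101)      # hypothetical 101-way label set
    clip = torch.randn(2, 16, 3, 112, 112)   # 2 clips of 16 RGB frames each
    print(model(clip).shape)                 # torch.Size([2, 101])

The same backbone-plus-LSTM pattern extends from recognition to the description and narration settings the abstract mentions: replacing the classifier with a word-level decoder that emits one token per LSTM step gives the variable-length text output, while gradients still flow through both the recurrent and convolutional stages during joint training.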


BibTeX citation:

@techreport{Donahue:EECS-2014-180,
    Author= {Donahue, Jeffrey and Hendricks, Lisa and Guadarrama, Sergio and Rohrbach, Marcus and Venugopalan, Subhashini and Saenko, Kate and Darrell, Trevor},
    Title= {Long-term Recurrent Convolutional Networks for Visual Recognition and Description},
    Year= {2014},
    Month= {Nov},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2014/EECS-2014-180.html},
    Number= {UCB/EECS-2014-180},
    Abstract= {Models based on deep convolutional networks have dominated recent image interpretation tasks; we investigate whether models which are also recurrent, or "temporally deep", are effective for tasks involving sequences, visual and otherwise. We develop a novel recurrent convolutional architecture suitable for large-scale visual learning which is end-to-end trainable, and demonstrate the value of these models on benchmark video recognition tasks, image description and retrieval problems, and video narration challenges. In contrast to current models which assume a fixed spatio-temporal receptive field or simple temporal averaging for sequential processing, recurrent convolutional models are "doubly deep" in that they can be compositional in spatial and temporal "layers". Such models may have advantages when target concepts are complex and/or training data are limited. Learning long-term dependencies is possible when nonlinearities are incorporated into the network state updates. Long-term RNN models are appealing in that they can directly map variable-length inputs (e.g., video frames) to variable-length outputs (e.g., natural language text) and can model complex temporal dynamics, yet they can be optimized with backpropagation. Our recurrent long-term models are directly connected to modern visual convnet models and can be jointly trained to simultaneously learn temporal dynamics and convolutional perceptual representations. Our results show that such models have distinct advantages over state-of-the-art models for recognition or generation which are separately defined and/or optimized.},
}

EndNote citation:

%0 Report
%A Donahue, Jeffrey 
%A Hendricks, Lisa 
%A Guadarrama, Sergio 
%A Rohrbach, Marcus 
%A Venugopalan, Subhashini 
%A Saenko, Kate 
%A Darrell, Trevor 
%T Long-term Recurrent Convolutional Networks for Visual Recognition and Description
%I EECS Department, University of California, Berkeley
%D 2014
%8 November 17
%@ UCB/EECS-2014-180
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2014/EECS-2014-180.html
%F Donahue:EECS-2014-180