Joint Colloquium Distinguished Lecture Series
Unsupervised Feature Learning and Deep Learning
Wednesday, April 20, 2011
To address the cost of hand-engineering features, researchers have recently developed "unsupervised feature learning" and "deep learning" algorithms that automatically learn feature representations from unlabeled data, bypassing much of this time-consuming engineering. Many of these algorithms are inspired by simple simulations of cortical (brain) computation and build on ideas such as sparse coding and deep belief networks. By doing so, they exploit large amounts of unlabeled data, which is cheap and easy to obtain, to learn a good feature representation, and they have surpassed the previous state of the art on a number of problems in vision, audio, and text. In this talk, I describe some of the key ideas behind unsupervised feature learning and deep learning, and present a few algorithms. I also speculate on how large-scale brain simulations may enable us to make significant progress in machine learning and AI.
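To give a concrete sense of sparse coding, one of the techniques mentioned above: the idea is to represent each unlabeled data point as a sparse combination of learned dictionary atoms. The sketch below is a minimal illustration (not the speaker's implementation), alternating an ISTA sparse-coding step with a least-squares dictionary update; all function and parameter names here are our own assumptions for illustration.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of the L1 norm: shrinks values toward zero."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def sparse_coding(X, n_atoms=16, lam=0.1, n_iter=20, seed=0):
    """Toy sparse coding: find codes A and dictionary D with X ~= A @ D,
    where A is encouraged to be sparse by an L1 penalty (weight lam).

    X : (n_samples, n_features) unlabeled data matrix.
    Returns D (n_atoms, n_features) and A (n_samples, n_atoms).
    """
    rng = np.random.default_rng(seed)
    n_samples, n_features = X.shape
    # Random unit-norm initial dictionary.
    D = rng.standard_normal((n_atoms, n_features))
    D /= np.linalg.norm(D, axis=1, keepdims=True)
    A = np.zeros((n_samples, n_atoms))
    for _ in range(n_iter):
        # Sparse-code step: a few ISTA iterations with D held fixed.
        L = np.linalg.norm(D @ D.T, 2)  # Lipschitz constant of the gradient
        for _ in range(10):
            grad = (A @ D - X) @ D.T
            A = soft_threshold(A - grad / L, lam / L)
        # Dictionary step: least-squares fit with A held fixed, then renormalize
        # each atom so the L1 penalty on A stays meaningful.
        D = np.linalg.lstsq(A, X, rcond=None)[0]
        norms = np.linalg.norm(D, axis=1, keepdims=True)
        D /= np.maximum(norms, 1e-12)
    return D, A
```

With an overcomplete dictionary (more atoms than input dimensions), each data point typically ends up described by only a few active atoms, which is the learned feature representation the abstract refers to.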
This talk will be broadly accessible, and will not assume a machine learning background.
Andrew Ng received his Ph.D. from Berkeley and is now an Associate Professor of Computer Science at Stanford University, where he works on machine learning and AI. His previous work includes autonomous helicopters, the STanford AI Robot (STAIR) project, and ROS (probably the most widely used open-source robotics software platform today). His current work focuses on neuroscience-informed deep learning and unsupervised feature learning algorithms. His group has won best paper/best student paper awards at ICML, ACL, CEAS, and 3DRR. He is also a recipient of the Alfred P. Sloan Fellowship and the 2009 IJCAI Computers and Thought Award.