Parvez Ahammad

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2008-90

August 4, 2008

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2008/EECS-2008-90.pdf

Traditionally, taking experimental measurements of a physical or biological phenomenon was an expensive, laborious, and very slow process. However, significant advances in device technologies and computational techniques have sharply reduced the cost of data collection. Capturing thousands of images of developing biological organisms, recording enormous amounts of video footage from a network of cameras monitoring an observation space, or obtaining a large number of neural measurements of brain signal patterns via non-invasive devices are examples of such data proliferation. Analyzing such large volumes of multi-dimensional data through expert supervision is neither scalable nor cost-effective. In this context, there is a need for systems that complement the expert user by learning meaningful and compact representations from large collections of multidimensional data (images, videos, etc.) with minimal supervision. In this dissertation, we present minimally supervised solutions to two such commonly encountered scenarios.

The first scenario arises when a large set of labeled, noisy observations is available from a given class (or phenotype) with an unknown generative model. An interesting challenge here is to estimate the underlying generative model and the distribution over the distortion parameters that map the observed examples to the generative model. For example, this is the scenario encountered while attempting to construct high-throughput, data-driven spatial gene expression atlases from many thousands of noisy images of Drosophila melanogaster imaginal discs. We discuss improvements to an existing information-theoretic approach for joint pattern alignment (JPA) to address such high-throughput scenarios. Along with a discussion of the assumptions, advantages, and limitations of our approach (Chapter 2), we show how this framework can be applied to a variety of applications (Chapters 3, 4, 5).
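To make the joint-alignment idea concrete, the following is a minimal sketch of congealing-style joint pattern alignment: each image in a stack is shifted so that the summed per-pixel entropy of the stack decreases. It assumes small grayscale images with intensities in [0, 1] and translation-only distortions; the function names, greedy search, and histogram settings are illustrative assumptions, not the dissertation's implementation.

import numpy as np

def stack_entropy(stack, bins=16):
    # Sum of per-pixel histogram entropies across the image stack
    # (intensities assumed normalized to [0, 1]).
    n, h, w = stack.shape
    total = 0.0
    for i in range(h):
        for j in range(w):
            hist, _ = np.histogram(stack[:, i, j], bins=bins, range=(0.0, 1.0))
            p = hist / n
            p = p[p > 0]
            total -= np.sum(p * np.log(p))
    return total

def align(images, max_shift=2, iters=5):
    # Greedily pick an integer (dy, dx) shift for each image that lowers
    # the stack entropy; repeat for a few passes over the collection.
    stack = np.array(images, dtype=float)
    shifts = [(0, 0)] * len(stack)
    for _ in range(iters):
        for k in range(len(stack)):
            best, best_shift = stack_entropy(stack), (0, 0)
            for dy in range(-max_shift, max_shift + 1):
                for dx in range(-max_shift, max_shift + 1):
                    trial = stack.copy()
                    trial[k] = np.roll(np.roll(stack[k], dy, axis=0), dx, axis=1)
                    e = stack_entropy(trial)
                    if e < best:
                        best, best_shift = e, (dy, dx)
            dy, dx = best_shift
            stack[k] = np.roll(np.roll(stack[k], dy, axis=0), dx, axis=1)
            shifts[k] = (shifts[k][0] + dy, shifts[k][1] + dx)
    return stack, shifts

In this simplified setting, the recovered per-image shifts play the role of the distortion parameters, and the aligned stack approximates the shared underlying pattern.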

The second scenario arises when observations are available from multiple classes (phenotypes) without any labels. An interesting challenge here is to estimate a data-driven organizational hierarchy that facilitates efficient retrieval and easy browsing of the observations. For example, this is the scenario encountered while organizing large collections of unlabeled activity videos based on the spatio-temporal patterns, such as human actions, embedded in the videos. We show how insights from computer vision and data compression can be leveraged to provide a high-speed and robust solution to the problem of content-based hierarchy estimation (based on action similarity) for large video collections with minimal user supervision (Chapter 6). We demonstrate the usefulness of our approach on a benchmark dataset of human action videos.
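As an illustration of the compression-based flavor of this idea, the sketch below builds a hierarchy over unlabeled "videos" using the normalized compression distance and agglomerative clustering. It assumes each video has already been reduced to a byte string of quantized spatio-temporal features; the feature encoding, the use of zlib, and average linkage are illustrative assumptions rather than the exact pipeline of Chapter 6.

import zlib
import numpy as np
from scipy.cluster.hierarchy import linkage

def ncd(x: bytes, y: bytes) -> float:
    # Normalized compression distance between two byte strings.
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

def build_hierarchy(descriptors):
    # Condensed pairwise NCD matrix -> agglomerative clustering tree.
    n = len(descriptors)
    condensed = np.array([ncd(descriptors[i], descriptors[j])
                          for i in range(n) for j in range(i + 1, n)])
    return linkage(condensed, method='average')

# Toy usage: three hypothetical videos as quantized feature byte strings.
videos = [b"walkwalkwalkwalk", b"walkwalkwalkrun", b"jumpjumpjumpjump"]
tree = build_hierarchy(videos)

Videos whose feature streams compress well together end up close in the resulting tree, which supports browsing and retrieval without any labels.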

Advisor: S. Shankar Sastry


BibTeX citation:

@phdthesis{Ahammad:EECS-2008-90,
    Author= {Ahammad, Parvez},
    Title= {Learning Data Driven Representations from Large Collections of Multidimensional Patterns with Minimal Supervision},
    School= {EECS Department, University of California, Berkeley},
    Year= {2008},
    Month= {Aug},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2008/EECS-2008-90.html},
    Number= {UCB/EECS-2008-90},
    Abstract= {Traditionally, taking experimental measurements of a physical or biological phenomenon was an expensive, laborious, and very slow process. However, significant advances in device technologies and computational techniques have sharply reduced the cost of data collection. Capturing thousands of images of developing biological organisms, recording enormous amounts of video footage from a network of cameras monitoring an observation space, or obtaining a large number of neural measurements of brain signal patterns via non-invasive devices are examples of such data proliferation. Analyzing such large volumes of multi-dimensional data through expert supervision is neither scalable nor cost-effective. In this context, there is a need for systems that complement the expert user by learning meaningful and compact representations from large collections of multidimensional data (images, videos, etc.) with minimal supervision. In this dissertation, we present minimally supervised solutions to two such commonly encountered scenarios.

The first scenario arises when a large set of labeled, noisy observations is available from a given class (or phenotype) with an unknown generative model. An interesting challenge here is to estimate the underlying generative model and the distribution over the distortion parameters that map the observed examples to the generative model. For example, this is the scenario encountered while attempting to construct high-throughput, data-driven spatial gene expression atlases from many thousands of noisy images of Drosophila melanogaster imaginal discs. We discuss improvements to an existing information-theoretic approach for joint pattern alignment (JPA) to address such high-throughput scenarios. Along with a discussion of the assumptions, advantages, and limitations of our approach (Chapter 2), we show how this framework can be applied to a variety of applications (Chapters 3, 4, 5).

The second scenario arises when observations are available from multiple classes (phenotypes) without any labels. An interesting challenge here is to estimate a data-driven organizational hierarchy that facilitates efficient retrieval and easy browsing of the observations. For example, this is the scenario encountered while organizing large collections of unlabeled activity videos based on the spatio-temporal patterns, such as human actions, embedded in the videos. We show how insights from computer vision and data compression can be leveraged to provide a high-speed and robust solution to the problem of content-based hierarchy estimation (based on action similarity) for large video collections with minimal user supervision (Chapter 6). We demonstrate the usefulness of our approach on a benchmark dataset of human action videos.},
}

EndNote citation:

%0 Thesis
%A Ahammad, Parvez 
%T Learning Data Driven Representations from Large Collections of Multidimensional Patterns with Minimal Supervision
%I EECS Department, University of California, Berkeley
%D 2008
%8 August 4
%@ UCB/EECS-2008-90
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2008/EECS-2008-90.html
%F Ahammad:EECS-2008-90