# 2009 Research Summary

## DiscLDA: Discriminative Learning for Dimensionality Reduction and Classification

Simon Lacoste-Julien, Fei Sha^{1} and Michael Jordan

Google and Microsoft

Probabilistic topic models (and their extensions) have become popular as models of latent structures in collections of text documents or images. These models are usually treated as generative models and trained using maximum likelihood estimation, an approach which may be suboptimal in the context of an overall classification problem.

In this project, we present DiscLDA [1], a discriminative learning framework for topic models such as Latent Dirichlet Allocation (LDA) in the setting of dimensionality reduction with supervised side information. In DiscLDA, a class-dependent linear transformation is introduced on the topic mixture proportions (see Figure 1). This transformation is estimated by maximizing the conditional likelihood using Monte Carlo EM. By using the transformed topic mixture proportions as a new representation of documents, we obtain a supervised dimensionality reduction algorithm that uncovers the latent structure in a document collection while preserving predictive power for the classification task. We compare the predictive power of DiscLDA's latent representation with that of unsupervised LDA on the 20 Newsgroups document classification task and show that DiscLDA uncovers an interpretable latent structure while maintaining good classification accuracy.
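The core idea of a class-dependent linear transformation on topic proportions can be sketched in a few lines. This is only an illustration: the dimensions, the column-stochastic constraint on the transformation matrices, and all variable names below are assumptions for the sketch, not the exact parameterization or estimation procedure used in DiscLDA.

```python
import numpy as np

rng = np.random.default_rng(0)

K = 4            # per-class topic dimension (assumed small for illustration)
L = 6            # dimension of the transformed (shared) topic space
num_classes = 2  # assumed binary classification for the sketch

def random_stochastic(rows, cols, rng):
    """Random column-stochastic matrix: each column sums to 1, so it maps
    a point on the K-simplex to a point on the L-simplex."""
    m = rng.random((rows, cols))
    return m / m.sum(axis=0, keepdims=True)

# Hypothetical class-dependent transformations T_y (one L x K matrix per class).
T = [random_stochastic(L, K, rng) for _ in range(num_classes)]

# A document's topic mixture proportions theta, drawn from a Dirichlet prior
# as in standard LDA.
theta = rng.dirichlet(np.ones(K))

# Class-dependent transformed proportions: the new document representation.
transformed = [T[y] @ theta for y in range(num_classes)]

for y, t in enumerate(transformed):
    # Each transformed vector remains a valid point on the simplex.
    assert abs(t.sum() - 1.0) < 1e-9
    print(f"class {y}:", np.round(t, 3))
```

In the actual model the transformation parameters would be fit by maximizing the conditional likelihood of the labels rather than drawn at random; the sketch only shows how a single topic-proportion vector yields a different representation under each class.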

Figure 1: Graphical model for DiscLDA

- [1] S. Lacoste-Julien, F. Sha, and M. Jordan, "DiscLDA: Discriminative Learning for Dimensionality Reduction and Classification," *Advances in Neural Information Processing Systems (NIPS) 21*, 2009.

^{1}University of Southern California