Electrical Engineering and Computer Sciences


UC Berkeley


2009 Research Summary

Syntactic Machine Translation


Alexandre Bouchard-Cote, John Sturdy DeNero, Aria Delier Haghighi, Percy Shuo Liang and Daniel Klein

National Science Foundation

Machine Translation Projects

We work on a variety of sub-problems related to machine translation, including word alignment, estimation, decoding, and synchronous parsing. Currently, we are also building an in-house end-to-end syntactic translation system.

Translation Model Estimation

We investigated applying the perceptron algorithm, a discriminative training technique for structured output domains, to learn millions of feature parameters in a translation system, including translation table probabilities and hyperparameters [1].

Machine translation does not have traditional supervised data sets: reference translations are not the only correct outputs of a system and do not provide information about sentence segmentation or transfer components. Additionally, reference sentences cannot always be generated by a standard phrase-based system's translation table. We considered three update strategies that address these issues:

  • Bold updating: update toward the highest scoring way of generating the reference sentence;
  • Local updating: update toward the item on the system's n-best list with the highest BLEU score (or other error metric); and
  • Hybrid updating: update with bold updating when the reference is reachable and local updating otherwise.
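The three strategies can be sketched as a single update routine. This is an illustrative reconstruction, not the paper's implementation: the decoder interface (`decode_nbest`, `force_decode`) and the feature-vector representation are assumed, and `bleu` stands in for whichever sentence-level error metric is used.

```python
# Hypothetical sketch of the three perceptron update strategies.
# `decode_nbest(source)` is assumed to return derivations best-first as
# (feature_dict, hypothesis) pairs; `force_decode(source, reference)` returns
# the feature vector of the best derivation of the reference, or None when
# the reference is unreachable under the translation table.

def perceptron_update(weights, source, reference, decode_nbest, force_decode,
                      bleu, strategy="hybrid", eta=1.0):
    """One structured-perceptron step: w += eta * (phi(target) - phi(prediction))."""
    nbest = decode_nbest(source)
    pred_feats, _ = nbest[0]                      # model's current best derivation

    reachable = force_decode(source, reference)
    if strategy == "bold" or (strategy == "hybrid" and reachable is not None):
        if reachable is None:
            return weights                        # bold updating skips unreachable references
        target_feats = reachable                  # bold: best way to generate the reference
    else:                                         # local: best-BLEU item on the n-best list
        target_feats, _ = max(nbest, key=lambda fh: bleu(fh[1], reference))

    for f, v in target_feats.items():             # move toward the target derivation
        weights[f] = weights.get(f, 0.0) + eta * v
    for f, v in pred_feats.items():               # and away from the current prediction
        weights[f] = weights.get(f, 0.0) - eta * v
    return weights
```

Under hybrid updating, this degrades gracefully: sentences whose references the phrase table cannot reproduce still contribute a local update rather than being discarded.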

We have also explored generative techniques for estimating translation parameters. We studied why learning a phrase-level conditional translation model underperforms a model estimated heuristically from relative frequency statistics. The generative process underlying our model echoes assumptions made for phrase-based decoding [2].
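For concreteness, the heuristic baseline against which the generative model is compared estimates conditional phrase probabilities by relative frequency over extracted phrase pairs. The sketch below assumes the phrase pairs have already been extracted from word-aligned data; the function name is illustrative.

```python
# Illustrative relative-frequency phrase-table estimator:
# p(e | f) = count(f, e) / count(f), computed over extracted phrase pairs.
from collections import Counter, defaultdict

def relative_frequency_phrase_table(phrase_pairs):
    """Map each source phrase f to a dict of conditional probabilities p(e | f)."""
    pair_counts = Counter(phrase_pairs)                 # counts of (f, e) pairs
    source_counts = Counter(f for f, _ in phrase_pairs)  # marginal counts of f
    table = defaultdict(dict)
    for (f, e), c in pair_counts.items():
        table[f][e] = c / source_counts[f]
    return table
```

For example, if "la maison" is extracted three times, twice paired with "the house" and once with "the home", the estimator assigns probabilities 2/3 and 1/3 respectively.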

The type of overfitting exhibited by this model, where equally appropriate rules of different sizes compete for probability mass, appears in other NLP areas such as data-oriented parsing and remains an engaging area for future research.

Synchronous Parsing

Parsing a bitext under a synchronous grammar is a machine translation task that underlies both parameter estimation and learning syntax-aware word alignments. However, its asymptotic complexity, a sixth-order polynomial in sentence length under many parsing formalisms, imposes severe computational challenges [3].

We have developed a general method for finding admissible A* heuristics in complex search domains that project onto multiple simpler domains. We applied this technique to synchronous parsing by projecting a synchronous PCFG onto two monolingual PCFGs, which together provide admissible estimates of the completion cost of a synchronous parse.
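A toy example conveys the projection idea (this is not the synchronous-parsing algorithm itself): A* search on a grid, where each admissible heuristic comes from projecting the two-dimensional problem onto a single axis. The maximum of admissible heuristics is itself admissible, which is the generally safe way to combine projections; the paper's approximate factoring instead splits the model score across the projected domains.

```python
# Toy A* search illustrating heuristics obtained by projection onto simpler
# domains. True costs live on a 2-D grid; each projection ignores one dimension,
# so each projected cost is a lower bound, and so is their max.
import heapq

def astar_grid(start, goal, step_cost=1):
    """Shortest-path cost on an unbounded grid with 4-connected unit moves."""
    def h_row(p): return abs(goal[0] - p[0]) * step_cost   # projection onto rows
    def h_col(p): return abs(goal[1] - p[1]) * step_cost   # projection onto columns
    def h(p): return max(h_row(p), h_col(p))               # admissible combination

    frontier = [(h(start), 0, start)]                      # (f = g + h, g, state)
    seen = set()
    while frontier:
        _, g, p = heapq.heappop(frontier)
        if p == goal:
            return g
        if p in seen:
            continue
        seen.add(p)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            q = (p[0] + dr, p[1] + dc)
            if q not in seen:
                heapq.heappush(frontier, (g + step_cost + h(q), g + step_cost, q))
    return None
```

In the synchronous-parsing application, the two monolingual outside costs play the role of the row and column projections here: each bounds the completion cost of the joint item from below.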

In our NAACL paper (Haghighi et al., 2007), we also applied this technique to lexicalized parsing.

Syntactic Machine Translation

We believe that natural language syntax plays an important role in the translation process and will likewise prove a useful source of information for machine translation. While all successful machine translation systems encode aspects of syntax implicitly, we are considering approaches that explicitly generate English syntax trees during the translation process.

Presently, we are focusing on leveraging our advances in parsing and word alignment to improve translation quality. Our preliminary results are promising, and we look forward to presenting them to the community soon.

[1] P. Liang, A. Bouchard-Cote, D. Klein, and B. Taskar, "An End-to-End Discriminative Approach to Machine Translation," Proceedings of COLING-ACL, 2006.
[2] J. DeNero and D. Klein, "Tailoring Word Alignments to Syntactic Machine Translation," Proceedings of ACL, 2007.
[3] A. Haghighi, J. DeNero, and D. Klein, "Approximate Factoring for A* Search," Proceedings of HLT-NAACL, 2007.