With the increasing availability of real data for complex domains, it
is particularly appealing to automatically learn Bayesian network
structures that are sufficiently expressive yet still admit tractable
inference. While the method of thin junction trees can, in principle,
be used for this purpose, its fully greedy nature makes it prone to
overfitting, particularly when the data is sparse and when the desired
treewidth bound is nontrivial.
In this talk I will present a novel method for efficiently learning Bayesian networks of bounded treewidth that employs global structure modifications. At the heart of the method is the idea of dynamically updating a triangulation in a way that facilitates the addition of chain structures that are optimal with respect to some ordering and that are guaranteed to increase the treewidth bound of the model by at most one. We demonstrate the effectiveness of this "treewidth-friendly" method on real-life datasets and show that it is superior to the greedy approach as soon as the treewidth bound is nontrivial. Importantly, we show that by employing global operators that are closer in spirit to those used by Chow and Liu for learning optimal trees, we are able to learn better networks even when the treewidth of the model is not bounded.
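For context on the global operators referenced above: the classic Chow-Liu procedure learns the optimal tree-structured (treewidth-1) network by building a maximum-weight spanning tree over pairwise empirical mutual information. A minimal sketch of that baseline follows; the function names and the Kruskal-style union-find are illustrative choices, not details from the talk:

```python
import numpy as np
from itertools import combinations

def mutual_information(x, y):
    """Empirical mutual information between two discrete arrays."""
    mi = 0.0
    for xv in np.unique(x):
        for yv in np.unique(y):
            pxy = np.mean((x == xv) & (y == yv))
            px, py = np.mean(x == xv), np.mean(y == yv)
            if pxy > 0:
                mi += pxy * np.log(pxy / (px * py))
    return mi

def chow_liu_tree(data):
    """Maximum-weight spanning tree over pairwise mutual information.

    data: (n_samples, n_vars) array of discrete values.
    Returns a list of undirected edges (i, j) forming the optimal tree.
    """
    n_vars = data.shape[1]
    # Score every pair of variables and sort edges by descending weight.
    edges = sorted(
        ((mutual_information(data[:, i], data[:, j]), i, j)
         for i, j in combinations(range(n_vars), 2)),
        reverse=True)
    # Kruskal's algorithm with a simple union-find over variable indices.
    parent = list(range(n_vars))
    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u
    tree = []
    for _, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            tree.append((i, j))
    return tree
```

Because every tree has treewidth 1, this is the degenerate case of the bounded-treewidth learning problem; the talk's method generalizes the spirit of such global spanning-structure operators to larger treewidth bounds.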