Adaptive Probabilistic Networks

Stuart Russell, John Binder and Daphne Koller

EECS Department
University of California, Berkeley
Technical Report No. UCB/CSD-94-824
July 1994

http://www2.eecs.berkeley.edu/Pubs/TechRpts/1994/CSD-94-824.pdf

Belief networks (or probabilistic networks) and neural networks are two forms of network representations that have been used in the development of intelligent systems in the field of artificial intelligence. Belief networks provide a concise representation of general probability distributions over a set of random variables, and facilitate exact calculation of the impact of evidence on propositions of interest. Neural networks, which represent parameterized algebraic combinations of nonlinear activation functions, have found widespread use as models of real neural systems and as function approximators because of their amenability to simple training algorithms. Furthermore, the simple, local nature of most neural network training algorithms provides a certain biological plausibility and allows for a massively parallel implementation. In this paper, we show that similar local learning algorithms can be derived for belief networks, and that these learning algorithms can operate using only information that is directly available from the normal, inferential processes of the networks. This removes the main obstacle preventing belief networks from competing with neural networks on the above-mentioned tasks. The precise, local, probabilistic interpretation of belief networks also allows them to be partially or wholly constructed by humans; allows the results of learning to be easily understood; and allows them to contribute to rational decision-making in a well-defined way.
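As a hedged illustration of the kind of local computation the abstract alludes to (the notation below is ours, not necessarily the report's): a belief network over variables X_1, ..., X_n with conditional-probability-table entries w_{ijk} = P(X_i = x_{ij} | Pa(X_i) = u_{ik}) encodes the joint distribution as a product of local conditionals, and the gradient of the log-likelihood of training cases d_1, ..., d_M with respect to each entry can be assembled from posterior probabilities that ordinary evidence propagation already computes:

    P(x_1, \ldots, x_n) \;=\; \prod_{i=1}^{n} P\bigl(x_i \mid \mathrm{Pa}(X_i)\bigr),
    \qquad
    \frac{\partial \ln P(d_1, \ldots, d_M)}{\partial w_{ijk}}
      \;=\; \sum_{m=1}^{M} \frac{P\bigl(X_i = x_{ij},\ \mathrm{Pa}(X_i) = u_{ik} \mid d_m\bigr)}{w_{ijk}}.

Each summand involves only the family of X_i, which is what makes the update local; a concrete gradient-ascent implementation must also renormalize each CPT row after every step so that its entries remain a probability distribution.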


BibTeX citation:

@techreport{Russell:CSD-94-824,
    Author = {Russell, Stuart and Binder, John and Koller, Daphne},
    Title = {Adaptive Probabilistic Networks},
    Institution = {EECS Department, University of California, Berkeley},
    Year = {1994},
    Month = {Jul},
    URL = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/1994/5493.html},
    Number = {UCB/CSD-94-824}
}

EndNote citation:

%0 Report
%A Russell, Stuart
%A Binder, John
%A Koller, Daphne
%T Adaptive Probabilistic Networks
%I EECS Department, University of California, Berkeley
%D 1994
%@ UCB/CSD-94-824
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/1994/5493.html
%F Russell:CSD-94-824