Variational and Dynamical Perspectives On Learning and Optimization

Andre Wibisono

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2016-78

May 13, 2016

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2016/EECS-2016-78.pdf

The problem of learning from data is prevalent in the modern scientific age, and optimization provides a natural mathematical language for describing learning problems. We study some problems in learning and optimization from variational and dynamical perspectives, identifying the optimal structure in each problem and leveraging parallels between its continuous-time and discrete-time formulations.

We begin by studying the class of accelerated methods in optimization from a continuous-time perspective. We show that there is a Lagrangian functional that we call the Bregman Lagrangian, which generates a family of dynamics via the variational principle of least action, and these dynamics are related via speeding up time. Furthermore, we provide a systematic methodology for discretizing the dynamics into the family of accelerated higher-order algorithms with matching convergence rates in discrete time. Our work illuminates two classes of natural dynamics for optimization, the gradient and Lagrangian dynamics.
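As context, the Bregman Lagrangian has the following general form (a sketch following the standard presentation, where $\alpha_t, \beta_t, \gamma_t$ are time-dependent scaling functions, $h$ is a convex distance-generating function, and $f$ is the objective):

```latex
% Bregman Lagrangian (sketch): alpha_t, beta_t, gamma_t are scaling
% functions, h a convex distance-generating function, f the objective
\[
  \mathcal{L}(X, \dot X, t)
  = e^{\alpha_t + \gamma_t}
    \left( D_h\!\left(X + e^{-\alpha_t} \dot X,\, X\right)
           - e^{\beta_t} f(X) \right),
\]
where
\[
  D_h(y, x) = h(y) - h(x) - \langle \nabla h(x),\, y - x \rangle
\]
is the Bregman divergence of $h$.
```

The Euler-Lagrange equation of this functional generates the family of accelerated dynamics; rescaling time corresponds to moving within the family.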

Next, we study the problem of approximate inference in graphical models. We analyze reweighted Kikuchi approximation for estimating the log partition function, which approximates the entropy in the variational representation with a region graph decomposition. We establish sufficient conditions for the concavity of the objective function in terms of weight assignments in the Kikuchi expansion, and characterize the polytope of concavity in terms of the cycle structure of the region graph. We also provide an algorithm to find the global optimum and simulations to demonstrate the advantages of the reweighted Kikuchi approach.
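For reference, the variational representation underlying this approach (in standard exponential-family notation, with $\mathcal{M}$ the marginal polytope and $H$ the entropy) and the Kikuchi-style surrogate can be sketched as:

```latex
% Variational representation of the log partition function (sketch)
\[
  \log Z(\theta)
  = \sup_{\mu \in \mathcal{M}}
    \left\{ \langle \theta, \mu \rangle + H(\mu) \right\}.
\]
% Kikuchi-style approximation: replace the entropy H by a weighted sum
% of region entropies over a region graph with regions r and weights rho_r
\[
  H(\mu) \;\approx\; \sum_{r \in \mathcal{R}} \rho_r \, H_r(\mu_r).
\]
```

The concavity analysis in the report concerns which weight assignments $\rho_r$ keep this surrogate objective concave over the region graph.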

Finally, we study the problem of minimax option pricing as an online learning game between Nature and an Investor. Whereas the classical Black-Scholes model assumes the price fluctuates continuously following a geometric Brownian motion, we consider a worst-case model in which Nature chooses a sequence of price fluctuations under a cumulative quadratic volatility constraint, possibly with jumps, while the Investor makes a sequence of hedging decisions. We show that even in this adversarial, non-stochastic framework, the value of the game converges to the Black-Scholes option price, and the Black-Scholes hedging strategy is near-optimal for the Investor.
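For context, the Black-Scholes price that the game value converges to is the classical formula; a sketch for a European call with zero interest rate (spot $S$, strike $K$, volatility $\sigma$, horizon $T$, and $\Phi$ the standard normal CDF):

```latex
% Black-Scholes price of a European call (sketch; zero interest rate)
\[
  C(S, K, \sigma, T) = S\,\Phi(d_1) - K\,\Phi(d_2),
  \qquad
  d_{1,2} = \frac{\log(S/K) \pm \tfrac{1}{2}\sigma^2 T}{\sigma \sqrt{T}}.
\]
```

In the adversarial setting, the quadratic volatility budget plays the role of $\sigma^2 T$ in the stochastic model.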

Advisor: Michael Jordan


BibTeX citation:

@phdthesis{Wibisono:EECS-2016-78,
    Author= {Wibisono, Andre},
    Title= {Variational and Dynamical Perspectives On Learning and Optimization},
    School= {EECS Department, University of California, Berkeley},
    Year= {2016},
    Month= {May},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2016/EECS-2016-78.html},
    Number= {UCB/EECS-2016-78},
    Abstract= {The problem of learning from data is prevalent in the modern scientific age, and optimization provides a natural mathematical language for describing learning problems. We study some problems in learning and optimization from variational and dynamical perspectives, by identifying the optimal structure in the problems and leveraging the parallel results between continuous and discrete-time problems.

We begin by studying the class of accelerated methods in optimization from a continuous-time perspective. We show that there is a Lagrangian functional that we call the Bregman Lagrangian, which generates a family of dynamics via the variational principle of least action, and these dynamics are related via speeding up time. Furthermore, we provide a systematic methodology for discretizing the dynamics into the family of accelerated higher-order algorithms with matching convergence rates in discrete time. Our work illuminates two classes of natural dynamics for optimization, the gradient and Lagrangian dynamics.

Next, we study the problem of approximate inference in graphical models. We analyze reweighted Kikuchi approximation for estimating the log partition function, which approximates the entropy in the variational representation with a region graph decomposition. We establish sufficient conditions for the concavity of the objective function in terms of weight assignments in the Kikuchi expansion, and characterize the polytope of concavity in terms of the cycle structure of the region graph. We also provide an algorithm to find the global optimum and simulations to demonstrate the advantages of the reweighted Kikuchi approach.

Finally, we study the problem of minimax option pricing as an online learning game between Nature and an Investor. Whereas the classical Black-Scholes model assumes the price fluctuates continuously following the geometric Brownian motion, we consider a worst-case model in which Nature chooses a sequence of price fluctuations under a cumulative quadratic volatility constraint, possibly with jumps, while the Investor makes a sequence of hedging decisions. We show that even in this adversarial, non-stochastic framework, the value of the game converges to the Black-Scholes option price, and the Black-Scholes hedging strategy is near-optimal for the Investor.},
}

EndNote citation:

%0 Thesis
%A Wibisono, Andre 
%T Variational and Dynamical Perspectives On Learning and Optimization
%I EECS Department, University of California, Berkeley
%D 2016
%8 May 13
%@ UCB/EECS-2016-78
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2016/EECS-2016-78.html
%F Wibisono:EECS-2016-78