Krishnendu Chatterjee and Thomas A. Henzinger

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2006-33

April 4, 2006

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-33.pdf

A stochastic graph game is played by two players on a game graph with probabilistic transitions. We consider stochastic graph games with $\omega$-regular winning conditions specified as Rabin or Streett objectives. The corresponding decision problems are NP-complete and coNP-complete, respectively. The \emph{value} of the game for a player at a state $s$ given an objective $\Phi$ is the maximal probability with which the player can guarantee the satisfaction of $\Phi$ from $s$. We present a strategy improvement algorithm to compute values in stochastic Rabin games, where an improvement step involves solving Markov decision processes (MDPs) and non-stochastic Rabin games. The algorithm also computes values for stochastic Streett games but does not directly yield an optimal strategy for Streett objectives. We then show how to obtain an optimal strategy for Streett objectives by solving certain non-stochastic Streett games.
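For concreteness (this formulation is standard and not quoted from the report): writing $\Sigma$ and $\Pi$ for the strategy sets of the two players, the value at $s$ is $\mathrm{val}_1(\Phi)(s) = \sup_{\sigma \in \Sigma} \inf_{\pi \in \Pi} \Pr_s^{\sigma,\pi}(\Phi)$. The sketch below illustrates the strategy improvement idea only on the much simpler special case of an MDP with a reachability objective; all names and the setup are illustrative assumptions, and the report's algorithm for stochastic Rabin games additionally solves non-stochastic Rabin games in each improvement step, which is omitted here.

import numpy as np

def evaluate(P, policy, target):
    """Exact probability of reaching `target` under a fixed memoryless policy.

    P[a][s, t] is the probability of moving from state s to t on action a.
    States that cannot reach `target` under the policy get probability 0;
    the remaining non-target states are solved via a linear system.
    """
    n = P[0].shape[0]
    Ppi = np.array([P[policy[s]][s] for s in range(n)])

    # Backward graph reachability: which states can reach `target` at all?
    can_reach = set(target)
    changed = True
    while changed:
        changed = False
        for s in range(n):
            if s not in can_reach and any(Ppi[s, t] > 0 for t in can_reach):
                can_reach.add(s)
                changed = True

    v = np.zeros(n)
    v[list(target)] = 1.0
    unknown = [s for s in can_reach if s not in target]
    if unknown:
        # Solve (I - Ppi restricted to `unknown`) x = one-step mass into target.
        A = np.eye(len(unknown)) - Ppi[np.ix_(unknown, unknown)]
        b = Ppi[np.ix_(unknown, list(target))].sum(axis=1)
        v[unknown] = np.linalg.solve(A, b)
    return v

def strategy_improvement(P, target):
    """Switch to locally better actions until no single switch improves."""
    n, n_actions = P[0].shape[0], len(P)
    policy = [0] * n                      # start from an arbitrary strategy
    while True:
        v = evaluate(P, policy, target)   # value of the current strategy
        improved = False
        for s in range(n):
            if s in target:
                continue
            best = max(range(n_actions), key=lambda a: float(P[a][s] @ v))
            if P[best][s] @ v > P[policy[s]][s] @ v + 1e-9:
                policy[s] = best          # a profitable switch
                improved = True
        if not improved:                  # no profitable switch: optimal
            return v, policy

if __name__ == "__main__":
    # Toy MDP: 3 states, target {2}; action 0 makes state 0 loop forever,
    # action 1 routes state 0 through state 1, which reaches the target.
    P = [np.array([[1.0, 0.0, 0.0],
                   [0.0, 0.5, 0.5],
                   [0.0, 0.0, 1.0]]),
         np.array([[0.0, 1.0, 0.0],
                   [0.0, 0.5, 0.5],
                   [0.0, 0.0, 1.0]])]
    values, policy = strategy_improvement(P, target={2})
    print(values, policy)   # -> [1. 1. 1.] and a policy playing action 1 at state 0

The loop mirrors the invariant behind strategy improvement in general: each switch to a locally better action never decreases the value vector, and since there are finitely many memoryless strategies the iteration terminates with a strategy that admits no profitable switch, hence is optimal in this special case.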


BibTeX citation:

@techreport{Chatterjee:EECS-2006-33,
    Author= {Chatterjee, Krishnendu and Henzinger, Thomas A.},
    Title= {Strategy Improvement for Stochastic Rabin and Streett Games},
    Year= {2006},
    Month= {Apr},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-33.html},
    Number= {UCB/EECS-2006-33},
    Abstract= {A stochastic graph game is played by two players on a game graph with probabilistic transitions. We consider stochastic graph games with $\omega$-regular winning conditions specified as Rabin or Streett objectives. The corresponding decision problems are NP-complete and coNP-complete, respectively. The \emph{value} of the game for a player at a state $s$ given an objective $\Phi$ is the maximal probability with which the player can guarantee the satisfaction of $\Phi$ from $s$.
We present a strategy improvement algorithm to compute values in stochastic Rabin games, where an improvement step involves solving Markov decision processes (MDPs) and non-stochastic Rabin games. The algorithm also computes values for stochastic Streett games but does not directly yield an optimal strategy for Streett objectives. We then show how to obtain an optimal strategy for Streett objectives by solving certain non-stochastic Streett games.},
}

EndNote citation:

%0 Report
%A Chatterjee, Krishnendu 
%A Henzinger, Thomas A. 
%T Strategy Improvement for Stochastic Rabin and Streett Games
%I EECS Department, University of California, Berkeley
%D 2006
%8 April 4
%@ UCB/EECS-2006-33
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-33.html
%F Chatterjee:EECS-2006-33