Reinventing Partially Observable Reinforcement Learning
* Joint work with Allen Chang, Hannaneh Hajishirzi, Stuart Russell, Dafna Shahaf, and Afsaneh Shirazi (IJCAI'03, IJCAI'05, AAAI'06, ICAPS'06, IJCAI'07, AAAI'07).
Many complex domains offer only limited information about their exact state and about how actions affect it. In such domains, autonomous agents must make decisions while simultaneously learning action models and tracking the state of the world. This combined problem can be represented within the framework of reinforcement learning in POMDPs, and is known to be computationally difficult.
In this presentation I will describe a new framework for such combined decision making, learning, and tracking. The framework builds on our results on updating logical formulas (belief states) after deterministic actions. It includes algorithms that represent and update the set of possible action models and world states compactly and tractably, choose an action based on this set, and update the set after the chosen action is taken. Most importantly, and somewhat surprisingly, under lax conditions the number of actions our framework takes to achieve a goal is bounded polynomially in the length of an optimal plan for a fully observable, fully known domain.
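The belief-update idea above can be illustrated with a minimal sketch: a belief is a set of possible world states, a deterministic action maps each state forward, and an observation prunes inconsistent states. The toy domain, fluent names, and functions below are illustrative assumptions, not the talk's actual algorithms (which use compact logical representations rather than explicit state sets).

```python
# Sketch of belief-state tracking over an explicit set of possible states,
# assuming a deterministic action model. All names here (progress, filter_obs,
# the "unlock" domain) are invented for illustration.

def progress(belief, action, transition):
    """Apply a deterministic action to every state in the belief set."""
    return {transition(state, action) for state in belief}

def filter_obs(belief, observation):
    """Keep only states consistent with the observed fluents."""
    return {state for state in belief if observation <= state}

# Toy domain: a door that may or may not be locked; "unlock" unlocks it.
def transition(state, action):
    if action == "unlock":
        return frozenset(state - {"locked"} | {"unlocked"})
    return state

# Initially we do not know whether the door is locked.
belief = {frozenset({"locked"}), frozenset({"unlocked"})}
belief = progress(belief, "unlock", transition)
belief = filter_obs(belief, frozenset({"unlocked"}))
# The belief collapses to the single state {"unlocked"}.
```

The explicit set representation is exponential in the number of fluents; the point of the talk's framework is to keep an equivalent logical (formula-based) representation compact and to update it tractably.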
Finally, our framework leads to a new stochastic-filtering approach that has better accuracy than previous techniques.
Eyal Amir has been an Assistant Professor of Computer Science at the University of Illinois at Urbana-Champaign (UIUC) since January 2004. His research includes reasoning, learning, and decision making with logical and probabilistic knowledge, dynamic systems, and commonsense reasoning. Before UIUC he was a postdoctoral researcher at UC Berkeley (2001-2003) with Stuart Russell, and did his Ph.D. on logical reasoning in AI with John McCarthy. He received B.Sc. and M.Sc. degrees in mathematics and computer science from Bar-Ilan University, Israel in 1992 and 1994, respectively. Eyal is a Fellow of the Center for Advanced Studies and of the Beckman Institute at UIUC (2007-2008), was chosen by IEEE as one of the "10 to watch in AI" (2006), received the NSF CAREER award (2006), and was awarded the Arthur L. Samuel award for the best Computer Science Ph.D. thesis (2001-2002) at Stanford University.