AI Seminar

Reasoning about Rationality and Beliefs

Avi Pfeffer

Situations involving strategic interactions among humans and computers abound. In order to design computer agents that perform well in these situations, we need to model the way other agents make their decisions. Classical game theory provides one well-established approach to doing this, but it generally assumes that agents are rational and have perfect knowledge of the workings of the game, assumptions that often do not hold for real agents. Bayesian games go much of the way towards addressing these issues, but they are an unnatural, unwieldy, and sometimes unnecessarily large representation. We present a new language for modeling the beliefs and decision-making processes of agents. Our language is a network of models, where each model represents a different version of how the world works and how decisions are made. In one model, an agent may believe that another agent uses a different model to make its decisions, and may also have uncertainty about which model the other agent uses. In modeling irrationality, our language distinguishes between the strategy that is the best response to an agent's particular beliefs and the strategy the agent actually plays. We present a notion of equilibrium that relates both kinds of strategies. We argue that the language is more natural than Bayesian games, and more precise in modeling irrational behavior. In some cases, models in the language are also exponentially smaller.
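
The structure described above can be pictured with a small sketch. The Python snippet below is a minimal, hypothetical illustration (the class and field names are ours, not the talk's notation): each model pairs a view of the world with beliefs about which models the other agents use, and each agent carries both a best-response strategy and the strategy it actually plays.

```python
# Minimal illustrative sketch of a "network of models" (hypothetical names,
# not the authors' API). Each Model records beliefs about which models the
# other agents use; each Agent distinguishes its best-response strategy from
# the strategy it actually plays, which need not coincide for an irrational agent.

from dataclasses import dataclass, field


@dataclass
class Model:
    """One version of how the world works and how a decision is made."""
    name: str
    # For each other agent, a distribution over the models this model
    # assumes that agent uses, e.g. {"opponent": {"rational": 0.7, "naive": 0.3}}.
    beliefs_about_others: dict = field(default_factory=dict)


@dataclass
class Agent:
    name: str
    # Uncertainty over which model this agent uses (model name -> probability).
    model_distribution: dict
    # Best response to the agent's beliefs (action -> probability).
    best_response: dict = field(default_factory=dict)
    # Strategy the agent actually plays (action -> probability).
    played_strategy: dict = field(default_factory=dict)


# Example network: "self" is unsure whether the opponent best-responds
# or follows a simple heuristic.
rational = Model("rational", beliefs_about_others={"self": {"rational": 1.0}})
naive = Model("naive")
opponent = Agent("opponent", model_distribution={"rational": 0.7, "naive": 0.3})
```

The sketch shows only the bookkeeping; a full treatment would also attach payoffs and compute an equilibrium relating the best-response and played strategies.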

We have applied these ideas to modeling the way people negotiate. We learned a model in which people could use one of several possible procedures to make their decisions. Using the learned model, we designed a negotiation agent and showed that it outperforms other game-theoretic agents, and even outperforms humans.
Associate Professor of Computer Science
Division of Engineering and Applied Sciences
Harvard University

Sponsored by

AI Lab