AI Seminar
Low-Rank Spectral Learning for Predictive State Representations
In many settings we experience the world as a sequence of
observations. For instance, we might observe the daily weather, read
the words in a document, or receive a series of frames from a security
camera. We would like to extract from these data a model of the world
that allows us to make predictions: the probability of rain on Friday,
the next word likely to be typed by a smartphone user, or whether to
call the police.
Predictive state representations (PSRs), which generalize HMMs and
POMDPs, model these kinds of problems by using predictions about
future events as a representation of the current state. PSRs are
appealing in part because they arise naturally from a spectral
learning algorithm that computes PSR parameters in closed form using
data statistics, thus avoiding traditional optimization procedures
that are often slow and inexact. In theory, the spectral learning
algorithm is not only fast but also statistically consistent; however,
as I discussed at the AI seminar last year, the assumptions needed for
consistency are rarely, if ever, met in practice. Moreover, when those
assumptions are even slightly violated, the learned parameters can be
arbitrarily bad.
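For readers unfamiliar with the closed-form computation the abstract alludes to, the following minimal numpy sketch illustrates spectral learning for an observable-operator model in the style of Hsu, Kakade, and Zhang; the function names and the statistics P1, P21, and P3x1 are illustrative assumptions, not the speaker's implementation.

    import numpy as np

    def spectral_learn(P1, P21, P3x1, k):
        """Closed-form spectral learning from empirical statistics:
          P1[i]     ~= Pr[x1 = i]                       (unigram vector)
          P21[i,j]  ~= Pr[x2 = i, x1 = j]               (bigram matrix)
          P3x1[x]   ~= matrix of Pr[x3 = i, x2 = x, x1 = j], one per symbol x
        k is the target rank (number of latent states)."""
        # Truncated SVD of the bigram matrix gives a low-rank subspace.
        U, _, _ = np.linalg.svd(P21)
        U = U[:, :k]
        # All parameters are fixed linear-algebraic functions of the
        # statistics: no iterative optimization is involved.
        b1 = U.T @ P1                            # initial weights
        binf = np.linalg.pinv(P21.T @ U) @ P1    # normalization vector
        B = {x: U.T @ P3x1[x] @ np.linalg.pinv(U.T @ P21)
             for x in P3x1}                      # one operator per symbol
        return b1, binf, B

    def sequence_prob(seq, b1, binf, B):
        """Estimated Pr[x1, ..., xt] = binf^T B_{xt} ... B_{x1} b1."""
        b = b1
        for x in seq:
            b = B[x] @ b
        return float(binf @ b)

The appeal is that every parameter above comes from a single SVD and a few pseudoinverses of estimated probability matrices, so there is no slow or inexact optimization; the flip side, as the talk discusses, is that errors in those statistics, or a mismatched rank k, propagate directly into the learned operators.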
In this talk I will describe our recent work addressing the use of
spectral learning for PSRs under more realistic assumptions. Our work
is based on a theoretical analysis of a particular limiting case for
spectral learning, which we show has some interesting and appealing
properties. This analysis motivates several practical techniques,
which we show lead to significantly better results on synthetic and
real-world data.
Alex Kulesza is a postdoc in CSE.