Communications and Signal Processing Seminar
Domain Adaptation for Recommender Systems
In this talk I will describe two recent projects in my group that apply ideas from domain adaptation to problems in recommender systems. (1) The contextual bandit model is a framework for sequential decision making in which the decision-maker observes a feature vector (context) at each time step and must choose an action to maximize a numerical reward. We demonstrate how ideas from multitask learning can be applied to improve estimates of the reward function, leading to theoretical and empirical improvements in performance. (2) Recent work in matrix completion has argued that the common low-rank model for the ratings matrix is inaccurate because of an unknown monotone transformation underlying the ratings. We take this a step further and consider user-specific monotone transformations. A nearest-neighbor collaborative filtering algorithm is proposed and analyzed in this context. This is joint work with Aniket Deshmukh and Julian Katz-Samuels.
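To make the contextual bandit setting of project (1) concrete, here is a minimal simulation sketch: at each round the learner sees a context vector, picks an arm, and receives a noisy linear reward, while maintaining per-arm ridge-regression estimates of the reward parameters. This is a generic epsilon-greedy illustration of the framework, not the multitask algorithm from the talk; the dimensions, noise level, and linear reward model are all assumptions for the toy example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem sizes (assumed for illustration): K arms, d-dim contexts, T rounds.
K, d, T = 3, 5, 2000
theta = rng.normal(size=(K, d))          # unknown per-arm reward parameters

# Per-arm ridge-regression statistics: A_k = I + sum x x^T, b_k = sum r x.
A = np.stack([np.eye(d) for _ in range(K)])
b = np.zeros((K, d))
eps = 0.1                                # epsilon-greedy exploration rate
total_reward = 0.0

for t in range(T):
    x = rng.normal(size=d)               # observed context for this round
    # Batched solve gives the current ridge estimate theta_hat_k = A_k^{-1} b_k.
    theta_hat = np.linalg.solve(A, b[:, :, None])[:, :, 0]
    if rng.random() < eps:
        a = int(rng.integers(K))         # explore: random arm
    else:
        a = int(np.argmax(theta_hat @ x))  # exploit: best estimated reward
    r = theta[a] @ x + 0.1 * rng.normal()  # noisy linear reward
    A[a] += np.outer(x, x)               # update chosen arm's statistics
    b[a] += r * x
    total_reward += r
```

Estimating each arm's parameters independently, as above, is exactly the setting where multitask/domain-adaptation ideas can help: sharing statistical strength across arms (tasks) improves each reward estimate when arms are related.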
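For project (2), the key observation is that a user-specific monotone transformation changes rating values but preserves the user's ranking of items, so rank-based similarity between users is invariant to it. The short sketch below illustrates this with Kendall's rank correlation; the `kendall_tau` helper and the toy ratings are hypothetical and are not the algorithm analyzed in the talk.

```python
import numpy as np

def kendall_tau(a, b):
    """Kendall rank correlation: invariant to strictly monotone transforms
    applied to a or b, since it depends only on pairwise orderings."""
    n = len(a)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = np.sign(a[i] - a[j]) * np.sign(b[i] - b[j])
            concordant += s > 0
            discordant += s < 0
    return (concordant - discordant) / (n * (n - 1) / 2)

# Toy ratings: user 1's ratings are a monotone transform of user 0's,
# so raw values differ but the rank correlation is exactly 1.
u0 = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
u1 = np.exp(u0)          # a user-specific monotone transformation
tau_same = kendall_tau(u0, u1)      # 1.0: identical preference ordering
tau_flip = kendall_tau(u0, -u0)     # -1.0: fully reversed ordering
```

A nearest-neighbor collaborative filter built on such a rank-based similarity can match users by their preference orderings even when each user reports ratings through a different unknown monotone scale.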
Clay Scott received his PhD in Electrical Engineering from Rice University in 2004, and joined the University of Michigan in 2006 with a primary appointment in EECS. His research interests focus on statistical machine learning theory and algorithms, with an emphasis on nonparametric methods for supervised and unsupervised learning. He has also worked on a number of applications stemming from various scientific disciplines, including brain imaging, nuclear threat detection, environmental monitoring, and computational biology. In 2010 he received a CAREER Award from the National Science Foundation.