
Communications and Signal Processing Seminar

Adaptive Discretization For Reinforcement Learning

Christina Lee Yu, Assistant Professor, Operations Research and Information Engineering (ORIE), Cornell University
WHERE:
Remote/Virtual

Abstract: We introduce the technique of adaptive discretization to design efficient model-free and model-based episodic reinforcement learning algorithms in large (potentially continuous) state-action spaces. We provide worst-case regret bounds for our algorithms that are competitive with state-of-the-art algorithms. Our algorithms have lower storage and computational requirements because they maintain a more efficient partition of the state and action spaces. We illustrate this via experiments on several canonical control problems, which show that our algorithms empirically perform significantly better than fixed discretization in terms of both faster convergence and lower memory usage.

This is joint work with Sean Sinclair, Tianyu Wang, Gauri Jain, and Siddhartha Banerjee.
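To give a flavor of the idea, below is a minimal illustrative sketch (not the speakers' exact algorithm) of model-free Q-learning with an adaptively refined partition of a continuous state-action space. The class names, the quadtree-style splitting rule, and the choice of [0,1] x [0,1] as the state-action space are assumptions made for illustration only.

```python
# Illustrative sketch of adaptive discretization for model-free Q-learning on a
# continuous state-action space in [0,1] x [0,1].  This is NOT the authors'
# algorithm; the splitting rule and all names here are illustrative assumptions.


class Region:
    """An axis-aligned cell of the state-action space [0,1] x [0,1]."""

    def __init__(self, s_lo, s_hi, a_lo, a_hi, q_init):
        self.s_lo, self.s_hi, self.a_lo, self.a_hi = s_lo, s_hi, a_lo, a_hi
        self.q = q_init          # optimistic Q-value estimate for the cell
        self.visits = 0          # number of times this cell was selected
        self.children = []       # populated once the cell is split

    def contains_state(self, s):
        return self.s_lo <= s <= self.s_hi

    def diameter(self):
        return max(self.s_hi - self.s_lo, self.a_hi - self.a_lo)

    def split(self):
        """Refine the cell into four children by halving each dimension."""
        sm = (self.s_lo + self.s_hi) / 2
        am = (self.a_lo + self.a_hi) / 2
        self.children = [
            Region(lo_s, hi_s, lo_a, hi_a, self.q)
            for lo_s, hi_s in ((self.s_lo, sm), (sm, self.s_hi))
            for lo_a, hi_a in ((self.a_lo, am), (am, self.a_hi))
        ]


class AdaptiveQLearner:
    """Maintains a tree-structured partition, refined only where data accumulates."""

    def __init__(self, horizon):
        self.root = Region(0.0, 1.0, 0.0, 1.0, q_init=float(horizon))

    def active_regions(self, s):
        """All leaf cells whose state interval contains the state s."""
        stack, leaves = [self.root], []
        while stack:
            r = stack.pop()
            if not r.contains_state(s):
                continue
            if r.children:
                stack.extend(r.children)
            else:
                leaves.append(r)
        return leaves

    def select(self, s):
        """Pick the relevant leaf with the largest (optimistic) Q-value."""
        return max(self.active_regions(s), key=lambda r: r.q)

    def update(self, region, reward, next_state):
        region.visits += 1
        step = 1.0 / region.visits                     # simple decaying step size
        future = max(r.q for r in self.active_regions(next_state))
        region.q += step * (reward + future - region.q)
        # Adaptive rule (assumed form): split once visits exceed ~1/diameter^2,
        # so frequently visited parts of the space get a finer grid than
        # rarely visited ones -- this is what keeps storage low.
        if not region.children and region.visits >= 1.0 / region.diameter() ** 2:
            region.split()
```

In use, an agent would call select(s) to get the best cell for the current state, play an action inside that cell's action interval (e.g., its midpoint), and then call update with the observed reward and next state. Only the regions the policy actually visits get refined, which is the source of the storage and computation savings described in the abstract.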

Bio: Christina Lee Yu is an Assistant Professor at Cornell University in the School of Operations Research and Information Engineering. Prior to Cornell, she was a postdoc at Microsoft Research New England. She received her PhD in 2017 and MS in 2013 in Electrical Engineering and Computer Science from the Massachusetts Institute of Technology, in the Laboratory for Information and Decision Systems, and her BS in Computer Science from the California Institute of Technology in 2011. She received an honorable mention for the 2018 INFORMS Dantzig Dissertation Award. Her recent interests include matrix and tensor estimation, multi-armed bandits, and reinforcement learning.

Join Zoom Meeting https://umich.zoom.us/j/97598571292

Meeting ID: 975 9857 1292

Passcode: XXXXXX (Will be sent via email to attendees)


See full seminar by Professor Yu