
Communications and Signal Processing Seminar

How to make the gradient descent-ascent converge to local minimax optima

Donghwan Kim, Associate Professor, KAIST
WHERE:
1311 EECS Building

Abstract: Can we effectively train a generative adversarial network (GAN) (or equivalently, optimize a minimax problem) in the same way we successfully train a classification neural network (or equivalently, minimize a function) using gradient methods? Currently, the answer is 'No'. The remarkable success of gradient descent in minimization is supported by theoretical results: under mild conditions, gradient descent converges to a local minimizer and almost surely avoids strict saddle points. However, comparable theoretical support for minimax optimization is currently lacking. This talk will discuss recent progress in addressing this gap using dynamical systems theory. Specifically, this talk will present new variants of gradient descent-ascent that, under mild conditions, converge to local minimax optima in cases where existing gradient descent-ascent methods fail to do so.
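To illustrate the convergence gap described in the abstract, the sketch below runs plain simultaneous gradient descent-ascent on the classic toy minimax objective f(x, y) = x*y, whose unique saddle point is (0, 0), and compares it with the well-known extragradient update. This is only a minimal, illustrative example with an assumed step size; it is not the new variants that will be presented in the talk.

```python
import numpy as np

# Toy minimax objective f(x, y) = x * y; the unique saddle point is (0, 0).
def grad_x(x, y):
    return y  # df/dx

def grad_y(x, y):
    return x  # df/dy

def gda(x, y, lr=0.1, steps=200):
    """Plain simultaneous gradient descent on x, ascent on y.
    On this bilinear objective the iterates spiral away from (0, 0)."""
    for _ in range(steps):
        x, y = x - lr * grad_x(x, y), y + lr * grad_y(x, y)
    return x, y

def extragradient(x, y, lr=0.1, steps=200):
    """Extragradient: take a look-ahead step, then update using the
    gradients evaluated at the look-ahead point. Converges on this toy problem."""
    for _ in range(steps):
        x_half = x - lr * grad_x(x, y)
        y_half = y + lr * grad_y(x, y)
        x, y = x - lr * grad_x(x_half, y_half), y + lr * grad_y(x_half, y_half)
    return x, y

print("plain GDA:     ", gda(1.0, 1.0))            # moves away from (0, 0)
print("extragradient: ", extragradient(1.0, 1.0))  # approaches (0, 0)
```

Running the script shows the plain gradient descent-ascent iterates growing in magnitude while the extragradient iterates shrink toward the saddle point, a simple instance of the kind of failure (and fix) the talk addresses in the more general local minimax setting.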

Bio: Donghwan Kim is an Associate Professor in the Department of Mathematical Sciences at Korea Advanced Institute of Science and Technology (KAIST). He received his Ph.D. in Electrical Engineering and Computer Science from the University of Michigan, and his B.S. from Seoul National University. His current research focuses on optimization and generalization in machine learning.

*** This event will take place in a hybrid format. The location for in-person attendance will be room 1311 EECS. Attendance will also be available via Zoom.

Join Zoom Meeting: https://umich.zoom.us/j/93679028340

Meeting ID: 936 7902 8340

Passcode: XXXXXX (Will be sent via e-mail to attendees)

Zoom passcode information is also available upon request to Kristi Rieger ([email protected]).

Faculty Host

Jeff Fessler, Interim Chair and Professor, Electrical Engineering and Computer Science