
Communications and Signal Processing Seminar

On the Emergence of Invariant Low-Dimensional Subspaces in Gradient Descent for Learning Deep Networks

Qing Qu, Assistant Professor, Electrical Engineering & Computer Science, University of Michigan
WHERE:
3427 EECS Building

Abstract: Over the past few years, an extensively studied phenomenon in training deep networks is the implicit bias of gradient descent towards parsimonious solutions. In this work, we first investigate this phenomenon by narrowing our focus to deep linear networks. Through our analysis, we reveal a surprising “law of parsimony” in the learning dynamics when the data possesses low-dimensional structures. Specifically, we show that the evolution of gradient descent starting from orthogonal initialization only affects a minimal portion of the singular vector spaces across all weight matrices. In other words, the learning process happens only within a small invariant subspace of each weight matrix, even though all weight parameters are updated throughout training. This simplicity in the learning dynamics could have significant implications for both efficient training and a better understanding of deep networks. First, the analysis enables us to considerably improve training efficiency by taking advantage of the low-dimensional structure of the learning dynamics: we can construct smaller, equivalent deep linear networks without sacrificing the benefits associated with their wider counterparts. Moreover, we demonstrate the potential implications for efficient training of deep nonlinear networks. Second, it allows us to better understand deep representation learning by elucidating the progressive linear separation and concentration of representations from shallow to deep layers. The study lays the foundation for understanding hierarchical representations in deep nonlinear networks.
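To make the “law of parsimony” concrete, the following minimal sketch (in Python/NumPy; not code from the talk, with illustrative layer widths, data rank, step size, and synthetic data chosen purely as assumptions) trains a small deep linear network by gradient descent from scaled orthogonal initialization on data whose inputs and targets depend only on an r-dimensional subspace, and then inspects the singular values of each weight update W_l(t) - W_l(0).

import numpy as np

rng = np.random.default_rng(0)
d, L, r, n = 32, 3, 2, 200          # layer width, depth, data rank, number of samples (assumed values)

# Synthetic low-dimensional data: inputs and targets depend on an r-dimensional subspace.
U = np.linalg.qr(rng.standard_normal((d, r)))[0]            # orthonormal basis of the subspace
X = U @ rng.standard_normal((r, n))                          # inputs lie in span(U)
Y = (rng.standard_normal((d, r)) / np.sqrt(d)) @ (U.T @ X)   # low-rank target map

# Scaled orthogonal initialization of each layer.
def random_orthogonal(d):
    return np.linalg.qr(rng.standard_normal((d, d)))[0]

W = [0.5 * random_orthogonal(d) for _ in range(L)]
W0 = [w.copy() for w in W]

# Plain gradient descent on the loss  (1/2n) * || W_L ... W_1 X - Y ||_F^2.
lr = 0.05
for step in range(2000):
    # Forward pass, caching the activation of every layer.
    acts = [X]
    for w in W:
        acts.append(w @ acts[-1])
    grad_out = (acts[-1] - Y) / n        # gradient w.r.t. the network output
    # Backward pass: gradient w.r.t. each weight matrix, then the update.
    for l in reversed(range(L)):
        grad_W = grad_out @ acts[l].T
        grad_out = W[l].T @ grad_out     # propagate before updating W[l]
        W[l] -= lr * grad_W

# Although every entry of every W_l changes during training, the update
# W_l(t) - W_l(0) is concentrated in only a few singular directions,
# on the order of the data rank r.
for l in range(L):
    s = np.linalg.svd(W[l] - W0[l], compute_uv=False)
    print(f"layer {l + 1}: top-{2 * r} singular values of the update: "
          f"{np.round(s[:2 * r], 3)}, largest remaining: {s[2 * r]:.1e}")

Under these assumptions, only a handful of singular values of each update are appreciably non-zero while the rest are comparatively tiny, mirroring the invariant low-dimensional subspaces described in the abstract.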

Bio: Qing Qu is an assistant professor in the EECS department at the University of Michigan. Prior to that, he was a Moore-Sloan data science fellow at the Center for Data Science, New York University, from 2018 to 2020. He received his Ph.D. from Columbia University in Electrical Engineering in Oct. 2018. He received his B.Eng. from Tsinghua University in July 2011, and an M.Sc. from Johns Hopkins University in Dec. 2012, both in Electrical and Computer Engineering. He interned at the U.S. Army Research Laboratory in 2012 and at Microsoft Research in 2016. His research interest lies at the intersection of the foundations of data science, machine learning, numerical optimization, and signal/image processing, with a focus on developing efficient nonconvex methods and global optimality guarantees for solving representation learning and nonlinear inverse problems in engineering and imaging sciences. He is the recipient of the Best Student Paper Award at SPARS’15 (with Ju Sun and John Wright) and of a Microsoft PhD Fellowship in machine learning. He received the NSF CAREER Award in 2022 and an Amazon Research Award (AWS AI) in 2023.

***Event will take place in a hybrid format. The location for in-person attendance will be room 3427 EECS. Attendance will also be available via Zoom.

Join Zoom Meeting: https://umich.zoom.us/s/99102451525

Meeting ID: 991 0245 1525

Passcode: XXXXXX (Will be sent via e-mail to attendees)

Zoom Passcode information is also available upon request to: Sher Nickrand ([email protected]) or Michele Feldkamp ([email protected]).

See full seminar by Professor Qu

Faculty Host

Lei Ying, Professor, Electrical Engineering and Computer Science, University of Michigan