AI Seminar

Using Motion to Understand Objects in the Real World

David Held, Computer Science Ph.D. Student, Stanford University

Many robots today are confined to relatively simple, controlled environments. One reason for this is that current methods for processing visual data tend to break down when faced with occlusions, viewpoint changes, poor lighting, and other challenging but common situations that occur when robots are placed in the real world. I will show that we can train robots to handle these variations by inferring the causes behind visual appearance changes. If we model how the world changes over time, we can be robust to the types of changes that objects often undergo. I demonstrate this idea on a number of applications, including 3D velocity estimation and segmentation for autonomous driving as well as 2D tracking with neural networks. By inferring the causes of appearance changes over time, we can make our methods more robust to a variety of challenging situations that commonly occur in the real world, thus enabling robots to come out of the factory and into our lives.
David Held is a Computer Science Ph.D. student at Stanford doing research at the intersection of robotics, computer vision, and machine learning. He is co-advised by Sebastian Thrun and Silvio Savarese. David has also interned at Google, working on the self-driving car project. Before Stanford, he worked as a software developer for a startup company and was a researcher at the Weizmann Institute, working on building a robotic octopus. He received a B.S. in Mechanical Engineering from MIT in 2005, an M.S. in Mechanical Engineering from MIT in 2007, and an M.S. in Computer Science from Stanford in 2012, for which he was awarded the Best Master's Thesis Award from the Computer Science Department.

Sponsored by

Toyota