
AI Seminar

How can a robot learn the foundations of knowledge?

Benjamin Kuipers, Professor, Computer Science & Engineering

An embodied agent experiences the physical world through low-level sensory and motor interfaces (the "pixel level"). However, in order to function intelligently, it must be able to describe its world in terms of higher-level concepts such as places, paths, objects, actions, goals, plans, and so on (the "object level"). How can higher-level concepts such as these, which make up the foundation of commonsense knowledge, be learned from unguided experience at the pixel level? I will describe progress toward providing a positive answer to this question.

This question is important in practical terms: as robots are developed with increasingly complex sensory and motor systems, and are expected to function over extended periods of time, it becomes impractical for human engineers to implement the robots' high-level concepts and to define how those concepts are grounded in sensorimotor interaction. The same question is also important in theory: must the knowledge of an AI system necessarily be programmed in by a human being, or can the concepts at the foundation of commonsense knowledge be learned from unguided experience?

Sponsored by

Toyota