Other Seminar

Using Large Language Models to Understand Human Cognition

Sean S. Trott, Assistant Professor, University of California, San Diego

WHERE: East Hall 4448

Join us for the Winter 2024 Foundations & Frontiers Speaker Series. Sean Trott (Assistant Professor, University of California, San Diego) will be joining us virtually in East Hall 4448.

The Foundations & Frontiers Speaker Series brings leading cognitive scientists to U-M to present a special pair of presentations on the same day. We will open the floor for Q&A after each session and provide pizza between the two presentations. More information about the two talks can be found below.

The Foundations

Many debates in Cognitive Science—such as whether certain cognitive capacities are innate or acquired through specific experiential input—are entrenched and difficult to resolve. A new paradigm attempts to address these debates using Large Language Models (LLMs) to test competing theories of human cognition. In particular, because (most) LLMs are trained on linguistic input alone, they serve as useful baselines: measures of what kinds of behaviors and capacities could in principle emerge purely from exposure to statistical patterns in language. In this talk, I discuss the motivations for such an approach and briefly survey several examples from the literature. Finally, I discuss the relevant trade-offs and considerations that might inform a researcher's decision about whether to use LLMs in their own research, including: the amount (and quality) of data an LLM has been trained on, issues of construct validity, and multimodal models.

The Frontiers

Humans often reason about the mental states of others, even when those mental states diverge from their own. The ability to reason about false beliefs—part of the broader constellation of abilities that make up "Theory of Mind"—is viewed by many as playing a crucial role in social cognition. Yet there is considerable debate about where this ability comes from. Some theories emphasize the role of innate biological endowments, while others emphasize the role of experience. In this talk, I consider a hypothesis about a specific kind of experience: language. To test this "language exposure hypothesis", I use GPT-3, a Large Language Model (LLM) trained on linguistic input alone, and ask whether and to what extent such a system displays evidence consistent with Theory of Mind. The LLM displays above-chance performance on a number of tasks, but also falls short of human performance in multiple cases. I conclude by discussing the implications of these results for the language exposure hypothesis specifically, and for research on Theory of Mind more generally.

Organizer

Weinberg Institute for Cognitive Science