AI Seminar

Sparse Feature Circuits: Discovering and Editing Interpretable Causal Graphs in Language Models

Samuel Marks
Postdoctoral Researcher, Northeastern University
WHERE:
3725 Beyster Building
Location: BBB 3725
Zoom: https://umich.zoom.us/j/97709524869
Meeting ID: 977 0952 4869
Passcode: aiseminar

Abstract

We introduce methods for discovering and applying sparse feature circuits: causally implicated subnetworks of human-interpretable features that explain language model behaviors. Circuits identified in prior work consist of polysemantic, difficult-to-interpret units like attention heads or neurons, rendering them unsuitable for many downstream applications. In contrast, sparse feature circuits enable detailed understanding of unanticipated mechanisms. Because they are built from fine-grained units, sparse feature circuits are useful for downstream tasks: we introduce SHIFT, in which we improve the generalization of a classifier by ablating features that a human judges to be task-irrelevant. Finally, we demonstrate an entirely unsupervised and scalable interpretability pipeline by discovering thousands of sparse feature circuits for automatically discovered model behaviors.
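At inference time, the SHIFT step described above amounts to zeroing selected feature activations in a sparse autoencoder basis and reconstructing the model's activations. Below is a minimal sketch of that operation, assuming a trained sparse autoencoder object `sae` with `encode`/`decode` methods and a human-curated list `irrelevant_ids` of feature indices; all of these names are hypothetical illustrations, not artifacts from the talk or paper.

```python
import torch

def ablate_features(activations: torch.Tensor,
                    sae,
                    irrelevant_ids: list[int]) -> torch.Tensor:
    """Zero out human-flagged SAE features and reconstruct activations.

    activations: model activations, shape (batch, d_model)
    sae: a trained sparse autoencoder (hypothetical interface)
    irrelevant_ids: indices of features judged task-irrelevant by a human
    """
    feats = sae.encode(activations)      # (batch, n_features) sparse feature codes
    feats[:, irrelevant_ids] = 0.0       # ablate the task-irrelevant features
    return sae.decode(feats)             # map back to the model's activation space

# Hypothetical usage: patch the edited activations back into the model,
# then re-evaluate the downstream classifier.
# patched = ablate_features(resid_acts, sae, flagged_ids)
```

The key design choice, per the abstract, is that the edit happens in a basis of fine-grained, human-interpretable features rather than over whole neurons or attention heads, which is what makes the ablation targets auditable by a human judge.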

Bio

Sam Marks is a postdoctoral researcher at Northeastern University working with David Bau on neural network interpretability. He is interested in applications of interpretability to AI safety, especially scalable oversight.

Organizer

AI Lab

Student Host

Martin Ziqiao Ma
AI Lab Seminar Tsar

Faculty Host

Wei Hu
Assistant Professor, Computer Science and Engineering, University of Michigan