Faculty Candidate Seminar
Information Theoretic Limits in Learning: Applications to Privacy and Security
A data set of interactions between users can be interpreted as a graph. Under what conditions can such a data set be anonymized? When the same users appear in multiple such data sets, correlations in graph structure can potentially be used to link users across data sets and deanonymize them. In the first part of the talk, we formalize this as the problem of recovering an alignment between two correlated graphs, and we determine the amount of correlation that is information-theoretically required for successful recovery. In the second part, we turn to adversarial machine learning and ask whether requiring a learned classifier to be adversarially robust increases its sample complexity.
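As a toy illustration of the alignment problem (a sketch, not material from the talk): below, two small graphs are generated by independently subsampling edges from a common parent graph, one is relabeled by a hidden permutation, and a brute-force search recovers the alignment that maximizes edge overlap. The model and the parameters n, p, and s are hypothetical stand-ins for a correlated-graph model, chosen only to make the example run.

```python
import itertools
import random

random.seed(0)
n, p, s = 7, 0.5, 0.9  # hypothetical: vertices, parent edge prob, subsampling prob

# Parent graph, and two correlated children obtained by edge subsampling.
pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
parent = {e for e in pairs if random.random() < p}
g1 = {e for e in parent if random.random() < s}
g2 = {e for e in parent if random.random() < s}

# Hide the correspondence by relabeling g2 with a random permutation.
true_perm = list(range(n))
random.shuffle(true_perm)
g2 = {tuple(sorted((true_perm[i], true_perm[j]))) for (i, j) in g2}

def overlap(perm):
    # Edges that g1 and g2 share when g1 is relabeled by the candidate permutation.
    mapped = {tuple(sorted((perm[i], perm[j]))) for (i, j) in g1}
    return len(mapped & g2)

# Brute-force maximum-overlap alignment (feasible only for tiny n).
best = max(itertools.permutations(range(n)), key=overlap)
```

With high correlation (s near 1) the maximum-overlap permutation typically matches the hidden one; as s shrinks, the overlap signal fades, and the talk's question is exactly how much correlation is needed for recovery to remain possible at all.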
Daniel Cullina is a postdoctoral scholar in the Department of Electrical Engineering at Princeton University. He obtained a Ph.D. in Electrical and Computer Engineering from the University of Illinois at Urbana-Champaign in 2016 and a B.S. in Electrical Engineering from Caltech in 2010. His research applies tools from information theory and combinatorics to problems in machine learning, security, and privacy.