Winter 2021: Adversarial Machine Learning
This is a new special topics course that will examine recent advances in the field of adversarial machine learning, from both an attack and a defense perspective. Deep neural networks (DNNs) are widely used in computer vision for detecting and classifying objects and are central to emerging systems for autonomous driving. Unfortunately, this raises a question of trust: are machine learning (ML) models sufficiently robust to make correct decisions when human safety is at risk? This course will examine research papers on vulnerabilities and defenses in machine learning systems with respect to various types of attacks, including data poisoning attacks during training or online learning, data perturbation attacks on a trained model that cause misclassifications, and deepfake attacks. Papers on bias and fairness in machine learning systems are also within scope.
The class will be conducted seminar style and will involve student presentations, discussions, and projects to bring everyone in the class up to speed on the foundations and cutting-edge research in the field. Each group will be expected to write a summary of one attack paper and one defense paper and present each paper to the class during the semester. The group should attempt to reproduce a subset of the results of the paper being presented (or, in the rare case that this is not possible due to a lack of datasets, sufficient detail, computational resources, or models, a subset of the results of another paper presented in the class).