Communications and Signal Processing Seminar

Multiclass and Multi-Label Learning with General Losses: What is the Right Output Coding and Decoding?

Shivani Agarwal
Rachleff Family Associate Professor of Computer and Information Science
University of Pennsylvania
WHERE:
Remote/Virtual
ABSTRACT: Many practical applications of machine learning involve multiclass learning problems with a large number of classes — indeed, multi-label learning problems can be viewed as a special case. Multiclass learning with the standard 0-1 loss is fairly well understood; however, in practice, applications with large numbers of classes often require performance to be measured via a different, problem-specific loss. What is the right way to design principled and efficient learning algorithms for multiclass (and multi-label) problems with general losses? 

From a theoretical standpoint, an elegant approach for designing statistically consistent learning algorithms is via the design of convex calibrated surrogate losses. From a practical standpoint, an approach that is often favored is that of output coding, which reduces multiclass learning to a set of simpler binary classification problems. In this talk, I will discuss recent progress in bringing together these seemingly disparate approaches under a unifying lens to develop statistically consistent and computationally efficient learning algorithms for a wide range of problems, in some cases recovering existing state-of-the-art algorithms, and in other cases providing new ones. Our algorithms require learning at most r real-valued scoring functions, where r is the rank of the target loss matrix, and come with corresponding principled decoding schemes. I will also discuss connections with the field of property elicitation, and new tools for deriving quantitative regret transfer bounds via strongly proper losses.
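To make the rank-r idea concrete, here is a minimal NumPy sketch (an illustration under toy assumptions, not the algorithms from the talk). It treats multi-label prediction over s binary tags as a multiclass problem with 2^s classes under the Hamming loss: although there are 2^s classes, the loss matrix has rank only s+1, so Bayes-optimal decoding needs just s+1 scores per instance. The distribution p below is a hypothetical stand-in for the conditional class probabilities that learned scoring functions would estimate.

```python
import numpy as np
from itertools import product

# Toy setup: multi-label prediction over s binary tags, viewed as a
# multiclass problem with n = 2^s classes under the Hamming loss.
s = 4
labels = np.array(list(product([0, 1], repeat=s)))   # (16, s) label vectors
n = len(labels)

# Loss matrix: L[y, t] = Hamming distance between label y and prediction t.
L = (labels[:, None, :] != labels[None, :, :]).sum(axis=2).astype(float)

# Although there are 2^s = 16 classes, the loss matrix is low-rank.
r = int(np.linalg.matrix_rank(L))
print("rank of L:", r)   # 5 = s + 1, far below n = 16

# Exact rank-r factorization L = A @ B.T via truncated SVD.
U, S, Vt = np.linalg.svd(L)
A = U[:, :r] * S[:r]     # (n, r)
B = Vt[:r].T             # (n, r)

# Bayes-optimal decoding for a conditional distribution p(y | x):
#   argmin_t  sum_y p(y) L[y, t]  =  argmin_t  <B_t, A^T p>,
# so only the r-dimensional vector A^T p needs to be estimated.
rng = np.random.default_rng(0)
p = rng.dirichlet(np.ones(n))     # hypothetical p(y | x)

scores = A.T @ p                  # the r scores learned functions would estimate
t_lowrank = np.argmin(B @ scores)
t_direct = np.argmin(L.T @ p)
assert t_lowrank == t_direct
print("optimal prediction:", labels[t_direct])
```

Because the factorization is exact here, decoding from the r scores agrees with decoding from the full n-dimensional expected-loss vector; the point of the sketch is only that the decoding step, not the surrogate-loss design, reduces to r learned quantities.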

Related Papers:

  1. Mingyuan Zhang, Harish G. Ramaswamy, and Shivani Agarwal. “Convex calibrated surrogates for the multi-label F-measure.”  In Proceedings of the 37th International Conference on Machine Learning (ICML), 2020. https://www.shivani-agarwal.net/Publications/2020/icml20-multilabel-f-measure.pdf 
  2. Arpit Agarwal and Shivani Agarwal. “On consistent surrogate risk minimization and property elicitation.” In Proceedings of the 28th Annual Conference on Learning Theory (COLT), 2015. https://www.shivani-agarwal.net/Publications/2015/colt15-surrogate-elicitation.pdf
  3. Shivani Agarwal. “Surrogate regret bounds for bipartite ranking via strongly proper losses.” Journal of Machine Learning Research, 15:1653-1674, 2014. https://www.shivani-agarwal.net/Publications/2014/jmlr-14-regret-bipartite-ranking.pdf

BIO: Shivani Agarwal is Rachleff Family Associate Professor of Computer and Information Science at the University of Pennsylvania, where she also directs the NSF-sponsored Penn Institute for Foundations of Data Science (PIFODS) and co-directs the Penn Research in Machine Learning (PRiML) center. She is currently an Action Editor for the Journal of Machine Learning Research and an Associate Editor for the Harvard Data Science Review, and served as Program Co-chair for COLT 2020. Her research interests include computational, mathematical, and statistical foundations of machine learning and data science; applications of machine learning in the life sciences and beyond; and connections between machine learning and other disciplines such as economics, operations research, and psychology. Her group’s research has been selected four times for spotlight presentations at the NeurIPS conference.

Join Zoom Meeting https://umich.zoom.us/j/92211136360

Meeting ID: 922 1113 6360

Passcode: XXXXXX (Will be sent via e-mail to attendees)

Zoom Passcode information is also available upon request to Katherine Godwin ([email protected]).

Faculty Host

Clayton Scott
Professor
EECS, University of Michigan