Communications and Signal Processing Seminar
Learning distributions and hypothesis testing via social learning
We consider problems of distributed estimation and hypothesis testing in a social learning setting. Individual nodes in a network receive noisy local (private) observations whose distribution is parameterized by a discrete parameter (hypothesis). The conditional distributions are known locally at the nodes, but the true parameter/hypothesis is not. We first consider a simple setup in which nodes must estimate the empirical distribution of their estimates; for this setup we analyze a general update rule using tools from stochastic approximation. We then turn to the more complex setting of hypothesis testing and analyze a social updating rule that combines Bayesian inference with linear consensus. We show that in this setting the exponential rate of learning has a natural interpretation as the correlation between the network centralities (influence) of the nodes and the divergence parameters (discernment) of their local tests.
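The Bayesian-plus-consensus style of update mentioned in the abstract can be sketched roughly as follows: each node performs a local Bayesian update of its belief using its private observation, then averages log-beliefs with its neighbors via a consensus weight matrix. The three-node network, Bernoulli likelihoods, and weight matrix below are illustrative assumptions, not details from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 3 nodes, 2 hypotheses, Bernoulli observations.
# p[i, k] = P(X_i = 1 | hypothesis k); values are illustrative only.
p = np.array([[0.3, 0.7],
              [0.5, 0.5],   # node 1 cannot distinguish the hypotheses alone
              [0.4, 0.6]])
true_hyp = 0

# Doubly stochastic weight matrix for a connected 3-node network
# (its left eigenvector / centrality is uniform in this symmetric case).
W = np.array([[0.6, 0.2, 0.2],
              [0.2, 0.6, 0.2],
              [0.2, 0.2, 0.6]])

beliefs = np.full((3, 2), 0.5)          # uniform priors over the hypotheses
for _ in range(500):
    x = rng.random(3) < p[:, true_hyp]  # each node's private observation
    # Local Bayesian update: weight each hypothesis by the likelihood of x.
    like = np.where(x[:, None], p, 1.0 - p)
    post = beliefs * like
    post /= post.sum(axis=1, keepdims=True)
    # Linear consensus on log-beliefs (geometric averaging over neighbors).
    beliefs = np.exp(W @ np.log(post))
    beliefs /= beliefs.sum(axis=1, keepdims=True)

print(beliefs[:, true_hyp])  # every node's belief in the true hypothesis
```

Even node 1, whose observations carry no information on its own, concentrates on the true hypothesis through consensus with its more discerning neighbors; the rate of concentration is governed by a centrality-weighted sum of the nodes' local KL divergences, matching the influence/discernment interpretation in the abstract.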
Anand D. Sarwate joined the Department of Electrical and Computer Engineering at Rutgers University as an Assistant Professor in 2014. He received B.S. degrees in Electrical Engineering and Mathematics from MIT in 2002, an M.S. in Electrical Engineering from UC Berkeley in 2005, and a Ph.D. in Electrical Engineering from UC Berkeley in 2008. From 2008 to 2011 he was a postdoctoral researcher at the Information Theory and Applications Center at UC San Diego, and from 2011 to 2013 he was a Research Assistant Professor at the Toyota Technological Institute at Chicago. He received an NSF CAREER award in 2015. His research areas are information theory, machine learning, and signal processing, with a focus on distributed inference and learning, privacy and security, and applications in biomedical research.