Enabling Human-Aware Automation: A Dynamical Systems Perspective on Human Cognition
This event is free and open to the public.
***Event will take place via Zoom. The Zoom link and password will be distributed via the Controls Group e-mail list-serv. To join this list-serv, please send an (empty) email message to email@example.com with the word “subscribe” in the subject line. Zoom information is also available upon request from Katherine Godwin (firstname.lastname@example.org).
ABSTRACT: Across many sectors, ranging from manufacturing to healthcare to the military theater, there is growing interest in the potential impact of automation that is truly collaborative with humans. Realizing this impact, though, rests on first addressing a fundamental challenge: designing automation to be aware of, and responsive to, the human with whom it is interacting. While a significant body of work exists on intent inference based on human motion, a human’s physical actions alone are not necessarily a predictor of their decision-making. Indeed, cognitive factors such as trust and workload play a substantial role in decision-making during interactions with autonomous systems. In this talk, I will describe our interdisciplinary efforts at tackling this problem, focusing on recent work in which we synthesized a near-optimal control policy using a trust-workload POMDP (partially observable Markov decision process) model framework that captures changes in human trust and workload in a context involving interactions between a human and an intelligent decision-aid system. Using transparency as the feedback variable, we designed a policy to balance competing performance objectives in a reconnaissance mission study in which a virtual robotic assistant aids human subjects in surveying buildings for physical threats. I will present experimental validation of our control algorithm through human subject studies and highlight how our approach mitigates the negative consequences of “over-trust” that can occur in such interactions. I will also discuss our related work on using psychophysiological data and classification techniques as an alternative method for real-time trust estimation, and its implications for human interactions with automation.
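To give a flavor of the POMDP framework the abstract describes, the sketch below models only a binary hidden trust state with automation transparency as the control input and the human's comply/reject response as the observation. All state names, probabilities, and rewards here are invented for illustration, and a myopic greedy policy stands in for the near-optimal policy synthesis discussed in the talk; this is not the speaker's actual model.

```python
# Illustrative sketch only: states, probabilities, and rewards are invented
# for demonstration and are not the model presented in the talk.

STATES = ["low_trust", "high_trust"]   # hidden human trust state
ACTIONS = [0, 1]                       # automation transparency: 0 = low, 1 = high
OBS = ["reject", "comply"]             # observed human response to the decision aid

# T[action][s][s']: transition probabilities; higher transparency tends to build trust
T = {
    0: [[0.9, 0.1], [0.2, 0.8]],
    1: [[0.6, 0.4], [0.1, 0.9]],
}

# Z[s][o]: observation probabilities; high trust makes compliance more likely
Z = [[0.7, 0.3],
     [0.2, 0.8]]

# R[s][a]: one-step reward; high transparency carries a small communication cost
R = [[0.0, 0.5],
     [1.0, 0.8]]

def belief_update(belief, action, obs):
    """Bayes filter over the hidden trust state: predict with T, correct with Z."""
    o = OBS.index(obs)
    predicted = [sum(belief[s] * T[action][s][s2] for s in range(2)) for s2 in range(2)]
    corrected = [Z[s2][o] * predicted[s2] for s2 in range(2)]
    total = sum(corrected)
    return [c / total for c in corrected]

def greedy_policy(belief):
    """Choose the transparency level maximizing expected one-step reward
    (a myopic stand-in for a near-optimal POMDP policy)."""
    expected = [sum(belief[s] * R[s][a] for s in range(2)) for a in ACTIONS]
    return max(ACTIONS, key=lambda a: expected[a])

belief = [0.5, 0.5]                      # start maximally uncertain about trust
action = greedy_policy(belief)           # policy selects a transparency level
belief = belief_update(belief, action, "comply")  # compliance shifts belief toward high trust
```

The key point the sketch captures is the feedback loop: the controller never observes trust directly, so it maintains a belief over the hidden state, updates that belief from the human's observed responses, and uses transparency as the actuation channel.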
BIO: Dr. Neera Jain is an Assistant Professor in the School of Mechanical Engineering and a faculty member in the Ray W. Herrick Laboratories at Purdue University. She directs the Jain Research Laboratory, whose vision is to advance technologies that will have a lasting impact on society through a systems-based approach grounded in dynamic modeling and control theory. A major thrust of her research is the design of human-aware automation through control-oriented modeling of human cognition. A second major research thrust is control co-design, with applications to complex energy systems. Dr. Jain earned her M.S. and Ph.D. degrees in mechanical engineering from the University of Illinois at Urbana-Champaign in 2009 and 2013, respectively. She earned her S.B. from the Massachusetts Institute of Technology in 2006. Upon completing her Ph.D., Dr. Jain was a visiting member of the research staff in the Mechatronics Group at Mitsubishi Electric Research Laboratories in Cambridge, MA, where she designed model predictive control algorithms for HVAC systems. In 2015 she was a visiting summer researcher at the Air Force Research Laboratory at Wright-Patterson Air Force Base. Dr. Jain and her research have been featured by NPR and Axios. As a contributor to Forbes.com, she writes on the topic of human interaction with automation and its importance in society. Her research has been supported by the National Science Foundation, the Air Force Research Laboratory, the Office of Naval Research, and private industry.