
Dissertation Defense
Tool-Use Robot Manipulation Tasks for Cooperative and Explainable Operations in Safety-Critical Domains
This event is free and open to the public.

Hybrid Event: Ford Robotics 2300 / Zoom (Passcode: 326566)
Abstract: For assistive tasks, robots need to be capable of performing tasks programmed by non-expert users. Tool-use and assembly tasks are of particular interest because they present many challenges, such as reasoning over interactions between multiple objects and performing complex manipulation behaviors. Safety-critical domains further complicate robot reasoning by constraining these manipulation tasks and requiring robots to perform tool use subject to a wide range of safety considerations.
We address the problem of reliable autonomous tool manipulation in safety-critical domains. Our goal is to advance planning and execution capabilities in tool-use object manipulation tasks through simple, explainable models, enabling robots to engage in dialogue about safety on human-robot teams. Specifically, we address three challenges: (1) autonomously composing multi-objective behaviors (actions that satisfy multiple goals); (2) robustly modeling tool grasps and generalizing grasps to novel tools; and (3) reasoning over and engaging in dialogue about safety while performing tasks in different domains.
To perform multi-objective manipulation tasks, we propose a causal control basis, which includes causal information describing how a multi-objective action functions in an assembly task. We demonstrate that the causal control basis reduces expert knowledge engineering for performing complex actions, making the execution of these behaviors more explainable. To further improve dexterous robot manipulation, we propose a grasp reflex model, which uses tactile servoing to robustly achieve tool grasps in manipulation tasks. We show that our proposed grasp reflex model is simple enough to be explainable and generalizable enough to achieve one-shot tactile servoing on novel tool instances. To make robots active, trustworthy collaborators, we propose the human-robot red teaming paradigm for safety-aware reasoning. We demonstrate that the human-robot red team can engage in dialogue about safety and improve the team’s shared understanding of a problem domain, enabling the robot to learn to complete tasks safely and to mitigate risks during task execution.
Taken together, our work emphasizes the importance of dialogue, understanding, and trust on human-robot teams, and of explainable methods for robot manipulation. In this way, the robot can be trusted to reliably perform complex tool-use manipulation tasks on human-robot teams in safety-critical problem domains.