For robots to assist humans in homes or hospitals, the capability to manipulate diverse objects is imperative. So far, however, robotic manipulation technology has struggled to cope with the uncertainty and lack of structure that characterize human environments.
Machine learning is a natural approach: the robot can adapt to a given scenario even if it was not programmed to handle it beforehand. Indeed, Deep Reinforcement Learning (deep RL), which has recently led to AI breakthroughs in computer games, has been publicized as the learning-based approach to robotics. To date, however, deep RL studies have focused on known and fully observable systems, where uncertainty is resolved through lengthy trial and error. Quickly learning to act in novel environments, as required for robotics, remains out of reach.
In this research, our overarching goal is to develop an algorithmic framework for applying deep learning to problems that tightly couple perception, planning, and control, thereby advancing robotic AI toward reliable manipulation of general objects in unstructured environments.
Towards this end, we shall develop neural network representations of uncertainty and algorithms that estimate uncertainty from data. We will develop theory and algorithms for decision making under uncertainty, bringing a fresh perspective to the problem based on Bayesian reinforcement learning (Bayes-RL). These advances will allow us to develop a general and practical methodology for learning-based robotic manipulation under uncertainty, validated through real-robot experiments.
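As one concrete illustration of what a neural network representation of uncertainty could look like, the sketch below uses a deep ensemble, in which disagreement among independently trained networks serves as an estimate of epistemic uncertainty. This is a common technique offered purely for illustration, not necessarily the representation the project will adopt; all names and hyperparameters (EnsembleRegressor, n_members, hidden) are assumptions of the sketch.

import torch
import torch.nn as nn

class EnsembleRegressor(nn.Module):
    """Ensemble of small MLPs; the spread of their predictions
    serves as an epistemic-uncertainty estimate (illustrative sketch)."""

    def __init__(self, in_dim, out_dim, n_members=5, hidden=64):
        super().__init__()
        self.members = nn.ModuleList([
            nn.Sequential(
                nn.Linear(in_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, out_dim),
            )
            for _ in range(n_members)
        ])

    def forward(self, x):
        # Stack member predictions: shape (n_members, batch, out_dim).
        preds = torch.stack([m(x) for m in self.members])
        # Mean is the point prediction; std across members is the
        # disagreement used as an uncertainty signal.
        return preds.mean(dim=0), preds.std(dim=0)

# Usage: each member would be trained on (bootstrapped) data; at decision
# time, a large std flags inputs where the model is uncertain, which a
# Bayes-RL planner could treat as states worth exploring or avoiding.
model = EnsembleRegressor(in_dim=4, out_dim=2)
mean, std = model(torch.randn(8, 4))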