Policy Blending and Recombination for Multimodal Contact-Rich Tasks
Multimodal information such as tactile, proximity and force sensing is essential for performing stable contact-rich manipulation. However, coupling multimodal information with motion control remains a challenging problem. Rather than learning a monolithic skill policy that consumes all feedback signals at all times, a skill should be divided into phases, each of which learns to use only the sensor signals relevant to that phase. This makes the primitive policy for each phase easier to learn and allows primitive policies to be reused more easily across different skills. However, stopping and abruptly switching between primitive policies results in longer execution times and less robust behaviour. We therefore propose a blending approach that seamlessly combines the primitive policies into a reliable overall control policy, and we evaluate both time-based and state-based variants of the blending. The approach was successfully validated in simulation and on a real robot equipped with an augmented finger vision sensor, on three tasks: opening a cap, turning a dial and flipping a breaker. The evaluations show that the blended policies with multimodal feedback can be easily learned and reliably executed.
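The abstract does not give the blending formula, but the idea of smoothly combining two primitive policies instead of hard-switching can be sketched as a convex combination whose weight ramps from one policy to the next. The sigmoid schedule, the `sharpness` parameter and the function names below are illustrative assumptions, not the paper's actual method:

```python
import math

def blend_weight(t, t_switch, sharpness=10.0):
    """Time-based blending weight: ~0 well before t_switch, ~1 well after.
    (A state-based variant would compute this from sensor features
    instead of the clock; both are assumptions for illustration.)"""
    return 1.0 / (1.0 + math.exp(-sharpness * (t - t_switch)))

def blended_action(t, policy_a, policy_b, t_switch):
    """Convex combination of the actions of two primitive policies,
    avoiding the stop-and-switch discontinuity described above."""
    w = blend_weight(t, t_switch)
    a = policy_a(t)
    b = policy_b(t)
    return [(1.0 - w) * x + w * y for x, y in zip(a, b)]

# Example: two constant 1-D primitive policies blended around t_switch = 1.0.
approach = lambda t: [0.0]   # hypothetical "approach" primitive
turn = lambda t: [1.0]       # hypothetical "turn" primitive
action = blended_action(1.0, approach, turn, t_switch=1.0)
```

At `t == t_switch` the weight is exactly 0.5, so the blended action sits midway between the two primitives; far from the switch point the active primitive dominates, which is the smooth hand-over the abstract argues for.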
IEEE Robotics and Automation Letters (RA-L) with ICRA2021 Presentation Option