
Author

  • Tetsuya Narita, Oliver Kroemer*
  • * External author

Company

  • Sony Corporation

Venue

  • RA-L

Date

  • 2021

Policy Blending and Recombination for Multimodal Contact-Rich Tasks

Abstract

Multimodal information such as tactile, proximity and force sensing is essential for performing stable contact-rich manipulations. However, coupling multimodal information with motion control remains a challenging topic. Rather than learning a monolithic skill policy that takes in all feedback signals at all times, skills should be divided into phases, with each phase learning to use only the sensor signals applicable to it. This makes the primitive policy for each phase easier to learn and allows the primitive policies to be reused more easily across different skills. However, stopping and abruptly switching between primitive policies results in longer execution times and less robust behaviours. We therefore propose a blending approach that seamlessly combines the primitive policies into a reliable combined control policy. We evaluate both time-based and state-based blending approaches. The resulting approach was successfully evaluated in simulation and on a real robot equipped with an augmented finger vision sensor, on three tasks: opening a cap, turning a dial and flipping a breaker. The evaluations show that the blended policies with multimodal feedback can be easily learned and reliably executed.
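
As an illustration of the blending idea described in the abstract, the Python sketch below combines two hypothetical phase primitives (an approach policy driven by pose feedback and a contact policy driven by force feedback) through a smooth state-based weight computed from a proximity reading. The policy forms, gains, thresholds and observation fields are assumptions made for illustration only, not the formulation used in the paper; a time-based variant would instead schedule the weight as a function of elapsed time.

```python
import numpy as np

# Illustrative sketch of blending two primitive phase policies into a single
# control command instead of switching between them abruptly. All policy
# forms, gains and observation fields are assumptions, not the paper's method.

def approach_policy(obs):
    """Approach-phase primitive: drive toward the target using pose feedback."""
    return 1.0 * (obs["target_pos"] - obs["ee_pos"])

def contact_policy(obs):
    """Contact-phase primitive: regulate normal force using force feedback."""
    desired_force = 2.0  # N, assumed setpoint
    return 0.1 * (desired_force - obs["normal_force"]) * obs["surface_normal"]

def state_based_weight(obs, threshold=0.05, sharpness=100.0):
    """Smooth weight in [0, 1] that shifts from the approach primitive to the
    contact primitive as the proximity reading falls below a threshold
    (state-based blending). A time-based variant would compute this weight
    from elapsed time instead of the sensed state."""
    return 1.0 / (1.0 + np.exp(-sharpness * (threshold - obs["proximity"])))

def blended_policy(obs):
    """Blend the two primitives with a smooth weight rather than hard switching."""
    w = state_based_weight(obs)
    return (1.0 - w) * approach_policy(obs) + w * contact_policy(obs)

# Example observation combining pose, proximity and force modalities.
obs = {
    "ee_pos": np.array([0.0, 0.0, 0.10]),
    "target_pos": np.array([0.0, 0.0, 0.0]),
    "proximity": 0.02,                       # m, close to contact
    "normal_force": 0.5,                     # N, light contact
    "surface_normal": np.array([0.0, 0.0, 1.0]),
}
print(blended_policy(obs))                   # blended end-effector command
```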

IEEE Robotics and Automation Letters (RA-L) with ICRA 2021 Presentation Option
