IPAB Workshop - 2/4/26

Title: Leveraging simulation for generalisable contact-rich skill learning

Speaker: Marina Aoyama
Date: Apr 02 2026, 13.00 - 14.00
Location: MF2

Abstract: Humans can naturally adapt their motions to the physical properties of the objects they interact with. For example, when wiping a surface, we adjust our movement and applied force depending on the stiffness and friction of the sponge, cleaning effectively without damaging the surface. To achieve such capabilities, robots must actively interact with objects to infer hidden physical properties and adjust their motions, often using force and tactile feedback. However, discovering such exploratory behaviours and collecting sufficiently diverse multisensory interaction data is impractical and expensive with real robots. In this talk, I present my work on leveraging simulation to enable robots to identify object properties through interaction and adapt downstream task motions. First, I introduce a reinforcement learning algorithm that learns exploratory behaviours to identify task-relevant properties of unknown objects, uses uncertainty-based policy switching from exploration to task execution, and adapts motions accordingly, enabling zero-shot sim-to-real transfer. Second, I present imitation learning frameworks that combine large-scale simulation with force and tactile sensing to learn skills that generalise across diverse object properties from only a small number of real-world demonstrations. Through experiments in both simulation and on physical robots, I demonstrate how simulation enables robots to learn generalisable skills, including highly dynamic tasks and dexterous tool manipulation using force, tactile, and proximity sensing, from limited real-world data.
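The "uncertainty-based policy switching" mentioned in the abstract can be illustrated with a minimal sketch. This is not the speaker's algorithm; it is a generic, assumed scheme in which an ensemble of property estimators produces an uncertainty signal (ensemble disagreement), and the controller runs an exploration policy until that uncertainty drops below a threshold, then hands over to the task policy. All function names and the shrinking fake "ensemble" are illustrative stand-ins.

```python
import statistics

# Hypothetical stand-ins for learned policies; names are illustrative.
def exploration_action(obs):
    return "probe"  # e.g. press/slide to excite the object's hidden dynamics

def task_action(obs, estimated_property):
    return "wipe"   # task motion conditioned on the inferred property

def estimate_property(history):
    # In a real system, an ensemble of estimators would each predict the
    # hidden property (e.g. stiffness) from the interaction history, and
    # their disagreement would serve as the uncertainty signal. Here we
    # fake an ensemble whose spread shrinks as more interaction is seen.
    n = max(len(history), 1)
    return [1.0 + i / (10.0 * n) for i in range(5)]

def run_episode(steps=20, var_threshold=1e-3):
    history, mode_log = [], []
    for _ in range(steps):
        preds = estimate_property(history)
        uncertainty = statistics.pvariance(preds)  # ensemble disagreement
        if uncertainty > var_threshold:
            act = exploration_action(None)  # still uncertain: keep exploring
        else:
            # Confident: switch to task execution, conditioned on the estimate.
            act = task_action(None, statistics.mean(preds))
        mode_log.append(act)
        history.append(act)
    return mode_log
```

Running `run_episode()` under these toy assumptions yields a few exploratory "probe" steps followed by "wipe" steps once the ensemble agrees, which is the switching behaviour the abstract describes at a high level.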