Paper accepted at Conference on Neural Information Processing Systems (NeurIPS) 2025

Held in San Diego, United States 2-7 December 2025

Elle Miller, Trevor McInroe, David Abel, Oisin Mac Aodha and Sethu Vijayakumar, Enhancing Tactile-based Reinforcement Learning for Robotic Control, Proc. Neural Information Processing Systems (NeurIPS 2025), San Diego (2025). [pdf] [video] [citation]

Achieving safe, reliable real-world robotic manipulation requires agents to evolve beyond vision and incorporate tactile sensing to overcome sensory deficits and reliance on idealised state information. Despite its potential, the efficacy of tactile sensing in reinforcement learning (RL) remains inconsistent. We address this by developing self-supervised learning (SSL) methodologies to more effectively harness tactile observations, focusing on a scalable setup of proprioception and sparse binary contacts. We empirically demonstrate that sparse binary tactile signals are critical for dexterity, particularly for interactions that proprioceptive control errors do not register, such as decoupled robot-object motions. Our agents achieve superhuman dexterity in complex contact tasks (ball bouncing and Baoding ball rotation). Furthermore, we find that decoupling the SSL memory from the on-policy memory can improve performance. We release the Robot Tactile Olympiad (RoTO) benchmark to standardise and promote future research in tactile-based manipulation. Project page
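The idea of decoupling the SSL memory from the on-policy memory can be sketched as two separate buffers fed by the same rollout: a large replay memory that the SSL objective reuses across updates, and a small rollout memory the RL update consumes once. This is a minimal illustrative sketch, not the paper's implementation; all class and method names, capacities, and the transition format are assumptions.

```python
import random
from collections import deque

class DecoupledBuffers:
    """Hypothetical sketch of decoupled memories: a long-lived buffer
    for the SSL objective and a fresh rollout buffer for on-policy RL.
    Names and sizes are illustrative, not taken from the paper."""

    def __init__(self, ssl_capacity=100_000):
        # SSL memory keeps data across many policy updates.
        self.ssl_memory = deque(maxlen=ssl_capacity)
        # On-policy memory is drained after every policy update.
        self.onpolicy_memory = []

    def add(self, transition):
        # Every environment transition feeds both memories.
        self.ssl_memory.append(transition)
        self.onpolicy_memory.append(transition)

    def sample_ssl(self, batch_size):
        # The SSL loss (e.g. predicting tactile signals) may reuse old data.
        pool = list(self.ssl_memory)
        return random.sample(pool, min(batch_size, len(pool)))

    def drain_onpolicy(self):
        # The on-policy RL update consumes the current rollout exactly once.
        batch, self.onpolicy_memory = self.onpolicy_memory, []
        return batch

# Toy usage: proprioception plus a sparse binary contact flag.
buffers = DecoupledBuffers()
for t in range(8):
    buffers.add({"proprio": float(t), "contact": t % 2})

ssl_batch = buffers.sample_ssl(4)   # reusable SSL batch of size 4
rl_batch = buffers.drain_onpolicy() # all 8 fresh transitions, then cleared
print(len(ssl_batch), len(rl_batch), len(buffers.onpolicy_memory))
```

The design point is simply that the two objectives need not share one data distribution: the SSL memory can retain off-policy experience while the RL update still sees only fresh rollouts.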