IPAB Workshop - 23/10/25
Oct 23 2025, 13.00 - 14.00, Room G.03
Speakers: Leonard Hinckeldey & Yongcheng Yao

Speaker: Leonard Hinckeldey

Title: Assistax: A Hardware-Accelerated Reinforcement Learning Benchmark for Assistive Robotics

Abstract: The development of reinforcement learning (RL) algorithms has been largely driven by ambitious challenge tasks and benchmarks. Games have dominated RL benchmarks because they present relevant challenges, are inexpensive to run, and are easy to understand. While games such as Go and Atari have led to many breakthroughs, they often do not translate directly to real-world embodied applications. Recognising the need to diversify RL benchmarks and to address the complexities that arise in embodied interaction scenarios, we introduce Assistax: an open-source benchmark designed to address challenges arising in assistive robotics tasks. Assistax uses JAX's hardware acceleration to achieve significant speed-ups for learning in physics-based simulations. In terms of open-loop wall-clock time, Assistax runs up to 370× faster when vectorising training runs compared to CPU-based alternatives. Assistax conceptualises the interaction between an assistive robot and an active human patient using multi-agent RL, training a population of diverse partner agents against which an embodied robotic agent's zero-shot coordination capabilities can be tested. Extensive evaluation and hyperparameter tuning for popular continuous-control RL and MARL algorithms provide reliable baselines and establish Assistax as a practical benchmark for advancing RL research in assistive robotics.
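The speed-up claim rests on JAX's ability to vectorise entire training runs on one accelerator. The sketch below is a rough illustration of that general pattern only, with hypothetical names and a dummy update step; it is not Assistax's actual API. jax.vmap maps a single update over a batch of parameter sets and RNG seeds, so one compiled program advances many independent runs in lockstep.

    # Minimal sketch (assumed names, not Assistax's API) of vectorising
    # many independent training runs with jax.vmap.
    import jax
    import jax.numpy as jnp

    def init_params(key):
        # Hypothetical tiny policy: one linear layer (4 obs -> 2 actions).
        return jax.random.normal(key, (4, 2)) * 0.1

    def train_step(params, key):
        # Stand-in for one RL update: a gradient step on a synthetic
        # loss, purely to show the vectorisation pattern.
        obs = jax.random.normal(key, (32, 4))          # fake observation batch
        loss_fn = lambda p: jnp.mean((obs @ p) ** 2)   # placeholder objective
        grads = jax.grad(loss_fn)(params)
        return params - 0.01 * grads

    keys = jax.random.split(jax.random.PRNGKey(0), 64)  # 64 parallel runs
    params = jax.vmap(init_params)(keys)                # batched parameter sets
    step = jax.jit(jax.vmap(train_step))                # batched, compiled update
    params = step(params, keys)                         # keys reused for brevity
    print(params.shape)                                 # (64, 4, 2): one set per run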
Speaker: Yongcheng Yao

Title: Can vision-language models detect and measure in medical images?

Abstract: Recent advances in large language models (LLMs) have enabled their adoption in healthcare for clinical text understanding and medical advice generation. Vision-language models (VLMs) extend this capability to visual data, showing promise in medical image interpretation. However, most existing VLMs focus on qualitative tasks, such as describing an image or classifying findings, while clinical decision-making sometimes depends on quantitative measurements, such as tumor size or angle and distance measurements. These quantitative reasoning abilities remain largely underexplored in current VLMs. In this talk, I will present MedVision, a large-scale dataset and benchmark designed to evaluate and improve VLMs in quantitative medical image analysis. MedVision spans 22 public datasets and 30 million image–annotation pairs across diverse anatomies and modalities. We benchmark three representative quantitative tasks: (1) anatomical structure and abnormality detection, (2) tumor or lesion size estimation, and (3) angle and distance measurement. Our findings show that existing VLMs struggle on these tasks, but fine-tuning on MedVision substantially enhances their quantitative precision.
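To make the quantitative tasks concrete, the sketch below shows two metrics such tasks imply: a relative error for size estimation and an angle computed from three image landmarks. The function names and the use of millimetres are illustrative assumptions, not MedVision's released evaluation code.

    # Illustrative sketch (assumed, not MedVision's evaluation code) of
    # metrics for size estimation and angle measurement.
    import jax.numpy as jnp

    def relative_error(pred_mm, ref_mm):
        # Mean absolute relative error of predicted vs reference sizes (mm).
        pred, ref = jnp.asarray(pred_mm), jnp.asarray(ref_mm)
        return jnp.mean(jnp.abs(pred - ref) / ref)

    def angle_deg(a, b, c):
        # Angle at vertex b formed by landmarks a-b-c, in degrees.
        u = jnp.asarray(a) - jnp.asarray(b)
        v = jnp.asarray(c) - jnp.asarray(b)
        cos = jnp.dot(u, v) / (jnp.linalg.norm(u) * jnp.linalg.norm(v))
        return jnp.degrees(jnp.arccos(jnp.clip(cos, -1.0, 1.0)))

    print(relative_error([12.0, 8.5], [10.0, 9.0]))       # ~0.128
    print(angle_deg([0.0, 1.0], [0.0, 0.0], [1.0, 0.0]))  # 90.0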