The University of Edinburgh (UoE) and Shenzhen Institute of Artificial Intelligence and Robotics for Society (AIRS) collaboration project.

Overview

The collaboration between the University of Edinburgh (UoE) and the Shenzhen Institute of Artificial Intelligence and Robotics for Society (AIRS) aims at fundamental and applied research in artificial intelligence and robotics. The current research focuses on perception, motion planning, and control for high-dimensional robotic systems and human-robot interaction (HRI).

People

University of Edinburgh (UoE)
Principal Investigator: Prof. Sethu Vijayakumar
Group Leaders: Dr. Songyan Xin, Dr. Lei Yan

Shenzhen Institute of Artificial Intelligence and Robotics for Society (AIRS)
Principal Investigator: Assistant Prof. Tin Lun Lam
Group Leader: Dr. Tianwei Zhang

Three Scientific Pillars

- Multi-Contact Planning and Control
- Multi-Agent Collaborative Manipulation
- Robot Perception

Multi-Contact Planning and Control

A basic requirement for a robot is the ability to locomote and to manipulate. Locomotion and manipulation in cluttered spaces result in additional contacts between parts of the robot and the environment, and these contacts can be exploited to enhance both capabilities. Multi-contact planning therefore extends beyond footstep planning and balance control: if we define locomotion as moving the robot's body and manipulation as moving external objects, both can be achieved by planning and controlling the contacts between the robot and the environment. The difficulty of the multi-contact planning problem comes mainly from three aspects:

- the discrete choice of the sequence of contact combinations;
- the continuous choice of contact locations and timings;
- the continuous choice of a path between two contact combinations (the transition).

The goal of this research is a general framework that can handle all such scenarios by formulating contacts as geometric constraints and contact forces as decision variables in a general numerical optimization problem.
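To make that formulation concrete, the sketch below solves the simplest instance of such a problem: distributing contact forces over a fixed set of contacts so that gravity is balanced and each force stays inside a linearized friction cone. This is an illustrative toy, not the project's actual framework; the mass, contact locations, and friction coefficient are assumed values.

```python
"""Minimal contact-force distribution sketch (illustrative only).

Given fixed contact points, find contact forces that balance gravity
while respecting linearized (pyramidal) friction cones. The mass,
contact locations, and friction coefficient are assumed values.
"""
import numpy as np
from scipy.optimize import minimize

MASS = 90.0                        # robot mass [kg] (assumed, roughly humanoid-scale)
G = np.array([0.0, 0.0, -9.81])    # gravity [m/s^2]
MU = 0.6                           # friction coefficient (assumed)

# Two flat-ground contacts (e.g. the feet), expressed in the CoM frame.
contacts = np.array([[0.0,  0.1, -1.0],
                     [0.0, -0.1, -1.0]])
n = len(contacts)

def unpack(x):
    return x.reshape(n, 3)                        # one 3D force per contact

def objective(x):
    return np.sum(x ** 2)                         # prefer small, well-distributed forces

def force_balance(x):
    return unpack(x).sum(axis=0) + MASS * G       # sum f_i + m*g = 0

def torque_balance(x):
    f = unpack(x)
    return np.cross(contacts, f).sum(axis=0)      # sum r_i x f_i = 0 about the CoM

def friction_pyramid(x):
    # Linearized cone: mu*fz >= |fx|, mu*fz >= |fy|, fz >= 0,
    # written as smooth one-sided inequalities (all must be >= 0).
    f = unpack(x)
    fx, fy, fz = f[:, 0], f[:, 1], f[:, 2]
    return np.concatenate([MU * fz - fx, MU * fz + fx,
                           MU * fz - fy, MU * fz + fy, fz])

res = minimize(objective, x0=np.zeros(3 * n), method="SLSQP",
               constraints=[{"type": "eq", "fun": force_balance},
                            {"type": "eq", "fun": torque_balance},
                            {"type": "ineq", "fun": friction_pyramid}])
print(res.x.reshape(n, 3))   # each row: optimal [fx, fy, fz] at one contact
```

The full multi-contact problem additionally optimizes over the discrete contact sequence and the continuous contact locations, timings, and transitions listed above, which is what makes it hard.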
Platform & System

Talos is an advanced humanoid designed to perform complex tasks in research and industrial settings. It uses torque control to move its limbs and can walk on uneven terrain, sense its environment, and operate power tools.

- Torque sensors in all joints
- Fully electrical actuation
- Capable of lifting 6-kilogram payloads per arm
- Full EtherCAT communications
- ROS-enabled
- Dynamic walking, including on uneven ground and stairs
- Can complete tasks such as drilling or screwing
- Human-robot interaction skills

Multi-Agent Collaborative Manipulation

Multi-agent collaborative manipulation, including bimanual manipulation, multi-robot collaboration, and human-robot collaboration, aims to extend the capabilities of intelligent robotic systems in complex, dynamic environments and to improve human working efficiency in manufacturing. This research mainly focuses on:

- multi-phase/contact optimization for large-momentum object manipulation;
- autonomous task allocation and distributed control for multi-agent collaboration (a small allocation sketch appears at the end of this page);
- policy learning and adaptive control for efficient human-robot collaboration.

Platform & System

LBR iiwa Robot Manipulator
The LBR iiwa is a 7-axis robot arm designed for safe human-robot collaboration in a shared workspace. Joint-torque sensors in all axes allow it to move precisely and to detect contact with humans and objects.

KUKA Mobile Platform
An omnidirectional mobile platform that navigates autonomously and flexibly. Combined with the latest KUKA Sunrise controller, it provides modular, versatile and, above all, mobile production concepts for the industry of the future.

Shadow Dexterous Hand
With 20 actuated degrees of freedom, absolute position and force sensors, and ultra-sensitive touch sensors on the fingertips, the hand provides unique capabilities for problems that require the closest approximation of the human hand currently possible.

Robot Perception

Perception includes proprioception and exteroception. One of the main research focuses will be state estimation from multi-modal information. For exteroceptive perception, we mainly focus on robot vision, which can provide rich information about the surroundings through LIDAR, depth sensors, and cameras. Raw sensor data alone are of little use to a robot, so another important part of the research is the post-processing and understanding of sensor data. We will develop an efficient semantic understanding system to extract semantic information about the environment, such as planar surfaces, moving objects, and the dynamic status of the scene. This information will serve as input for robot motion planning and control. Classic computer vision research usually assumes a static environment; our research will not be limited to that assumption. Since the ultimate goal is to apply these techniques to real-world applications, we will also explore perception in real-life dynamic environments in which humans are involved.

State Estimation from Multi-Modal Information
Vision-Based Tracking in Dynamic Environments

Research Topics

- How can dynamic objects in the robot's workspace be detected and handled using LIDAR or an RGB camera?
- How can a robot-understandable map be built from camera images or LIDAR point clouds?
- How can data from different sensors (camera, LIDAR, IMU, and force/torque sensors) be fused for robust robot state estimation?

Platform & System

LIDAR, Kinect, RealSense
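As a concrete illustration of one of the semantic-understanding topics above, extracting planar surfaces from a point cloud, here is a minimal RANSAC plane-fitting sketch. It is written in plain NumPy to stay self-contained; a real pipeline would more likely use a library such as PCL or Open3D, and the thresholds, iteration count, and synthetic cloud below are assumptions.

```python
"""Minimal RANSAC plane extraction from a point cloud (illustrative only)."""
import numpy as np

def fit_plane(p1, p2, p3):
    """Plane (unit normal n, offset d) through three points: n.x + d = 0."""
    n = np.cross(p2 - p1, p3 - p1)
    norm = np.linalg.norm(n)
    if norm < 1e-9:                    # degenerate (near-collinear) sample
        return None
    n /= norm
    return n, -n.dot(p1)

def ransac_plane(points, threshold=0.02, iters=500, rng=None):
    """Return a boolean inlier mask for the best plane found."""
    if rng is None:
        rng = np.random.default_rng(0)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        model = fit_plane(*sample)
        if model is None:
            continue
        n, d = model
        inliers = np.abs(points @ n + d) < threshold   # point-to-plane distance
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

# Synthetic cloud: a noisy ground plane plus uniform clutter (assumed data).
rng = np.random.default_rng(1)
ground = np.column_stack([rng.uniform(-1, 1, (500, 2)),
                          rng.normal(0.0, 0.005, 500)])
clutter = rng.uniform(-1, 1, (200, 3))
cloud = np.vstack([ground, clutter])

mask = ransac_plane(cloud)
print(f"plane inliers: {mask.sum()} / {len(cloud)}")
```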
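Finally, the allocation sketch promised in the multi-agent section above: task allocation among robots can be posed as a linear assignment problem and solved with the Hungarian algorithm. The robot poses, task locations, and travel-distance cost below are made-up illustrative choices, not the project's method.

```python
"""Minimal multi-robot task allocation sketch (illustrative only).

Poses 'which robot takes which task' as a linear assignment problem,
solved with the Hungarian algorithm; the cost is travel distance.
Robot poses and task locations are assumed values.
"""
import numpy as np
from scipy.optimize import linear_sum_assignment

robots = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0]])   # robot poses [m]
tasks = np.array([[1.9, 0.1], [0.1, 1.8], [0.2, 0.1]])    # task locations [m]

# Cost matrix: Euclidean distance from each robot to each task.
cost = np.linalg.norm(robots[:, None, :] - tasks[None, :, :], axis=-1)

rows, cols = linear_sum_assignment(cost)       # minimum-total-cost matching
for r, t in zip(rows, cols):
    print(f"robot {r} -> task {t} (distance {cost[r, t]:.2f} m)")
```

A centralized Hungarian solve is the simplest baseline; the distributed-control focus of the pillar would replace it with negotiation among the agents (for example auction-based allocation), which the sketch does not attempt to show.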