IPAB Workshop - 04/04/2024


Title: Few-shot self-supervised articulated object understanding and rendering


Abstract: Articulated object understanding, the study of objects with independently moving parts, is critical in fields such as robotics, virtual/augmented reality, and animation. Despite its importance, conventional methods rely primarily on labelled 3D data or large numbers of images, and so are limited by costly and time-consuming data collection. This work presents a few-shot, self-supervised approach that estimates part segmentation and motion parameters separately and optimizes them in an iterative, alternating manner, significantly reducing the amount of training data required.
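
To give a rough picture of the alternating scheme, the sketch below optimizes a soft part assignment and a single hinge angle on synthetic 2-D points, standing in for rendered images. It is a minimal illustration under assumed simplifications (one moving part, a point-matching loss in place of a photometric loss, made-up hyperparameters), not the authors' implementation.

```python
import torch

torch.manual_seed(0)

# Toy "object": two parts of n points each; the second part rotates about a
# hinge at the origin. (A 2-D stand-in for the 3-D setting in the abstract.)
n = 200
rest = torch.cat([torch.rand(n, 2) + torch.tensor([-1.5, 0.0]),   # static part
                  torch.rand(n, 2) + torch.tensor([0.5, 0.0])])   # moving part

def rotate(points, angle):
    c, s = torch.cos(angle), torch.sin(angle)
    R = torch.stack([torch.stack([c, -s]), torch.stack([s, c])])
    return points @ R.T

true_angle = torch.tensor(0.6)
target = rest.clone()
target[n:] = rotate(rest[n:], true_angle)   # observed articulated state

# Unknowns: a soft part label per point and the motion parameter.
logits = torch.zeros(2 * n, requires_grad=True)   # sigmoid -> P(moving part)
angle = torch.tensor(0.0, requires_grad=True)
opt_parts = torch.optim.Adam([logits], lr=0.05)
opt_motion = torch.optim.Adam([angle], lr=0.05)

def loss_fn():
    w = torch.sigmoid(logits).unsqueeze(1)             # part membership
    moved = (1 - w) * rest + w * rotate(rest, angle)   # compose both parts
    return ((moved - target) ** 2).mean()              # stand-in for photometric loss

for _ in range(300):
    # Alternate: update the motion parameter with the parts fixed, ...
    opt_motion.zero_grad(); loss_fn().backward(); opt_motion.step()
    # ... then the part assignment with the motion fixed.
    opt_parts.zero_grad(); loss_fn().backward(); opt_parts.step()

print(f"estimated angle {angle.item():.3f}  (true {true_angle.item():.3f})")
```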

Our approach requires only a pretrained renderer, such as a NeRF, together with a small set of images showing the object in a different articulation, captured from known camera poses. From these, our method efficiently estimates the motion parameters. It also enables part-aware composite rendering, in which the object's parts are repositioned to represent different movements, so that we can render images that accurately depict novel articulations. Notably, our method matches or exceeds the motion-parameter accuracy of previous methods while using considerably fewer images.
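
The sketch below illustrates one plausible reading of part-aware composite rendering: each sample along a camera ray is warped back into the rest frame of each part with the inverse rigid transform, the pretrained field is queried there, and the per-part densities and colours are blended before standard NeRF alpha-compositing. The field, the interface `field(x) -> (sigma, rgb)`, and the part-weight functions are all illustrative assumptions, not the authors' API.

```python
import math
import torch

def make_field():
    """Stand-in for a pretrained rest-pose NeRF: a fixed random MLP."""
    net = torch.nn.Sequential(torch.nn.Linear(3, 64), torch.nn.ReLU(),
                              torch.nn.Linear(64, 4))
    def field(x):                                        # x: (..., 3)
        out = net(x)
        return torch.relu(out[..., 0]), torch.sigmoid(out[..., 1:])  # sigma, rgb
    return field

def composite_render(field, rays_o, rays_d, part_transforms, part_weight_fns,
                     n_samples=64, near=0.0, far=4.0):
    """Render rays against the articulated object.

    Each sample along a ray is warped into the rest frame of every part via
    the inverse rigid transform; densities and colours are blended by soft
    part weights, then alpha-composited as in standard NeRF.
    """
    t = torch.linspace(near, far, n_samples)                          # (S,)
    pts = rays_o[:, None, :] + rays_d[:, None, :] * t[None, :, None]  # (R,S,3)

    sigma, rgb = 0.0, 0.0
    for (R_p, t_p), w_fn in zip(part_transforms, part_weight_fns):
        rest_pts = (pts - t_p) @ R_p       # inverse of x -> x @ R_p.T + t_p
        s_p, c_p = field(rest_pts)
        w = w_fn(rest_pts)                 # soft part membership, (R,S)
        sigma = sigma + w * s_p
        rgb = rgb + (w * s_p)[..., None] * c_p
    rgb = rgb / (sigma[..., None] + 1e-8)  # density-weighted colour blend

    delta = (far - near) / n_samples
    alpha = 1 - torch.exp(-sigma * delta)
    trans = torch.cumprod(torch.cat([torch.ones_like(alpha[:, :1]),
                                     1 - alpha[:, :-1]], dim=1), dim=1)
    return ((alpha * trans)[..., None] * rgb).sum(dim=1)   # (R, 3) pixels

if __name__ == "__main__":
    field = make_field()
    rays_o = torch.zeros(4, 3)
    rays_d = torch.nn.functional.normalize(torch.randn(4, 3), dim=-1)
    a = 0.5                                # new hinge angle for the moving part
    R1 = torch.tensor([[math.cos(a), -math.sin(a), 0.0],
                       [math.sin(a),  math.cos(a), 0.0],
                       [0.0,          0.0,         1.0]])
    parts = [(torch.eye(3), torch.zeros(3)),        # static part: identity
             (R1, torch.tensor([0.2, 0.0, 0.0]))]   # moving part: hinge + offset
    weights = [lambda p: torch.sigmoid(-5 * p[..., 0]),   # left half -> part 0
               lambda p: torch.sigmoid(5 * p[..., 0])]    # right half -> part 1
    print(composite_render(field, rays_o, rays_d, parts, weights).shape)  # (4, 3)
```

Because the parts enter the compositing step as independent rigid transforms, rendering a new articulation only changes the transforms, not the pretrained field itself.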