Touch if it’s transparent!
ACTOR: Active Tactile-based Category-Level Transparent Object Reconstruction
Accurate shape reconstruction of transparent objects is challenging due to their non-Lambertian surfaces, yet it is necessary for robots to perceive object pose accurately and manipulate safely. While vision-based sensing can produce erroneous measurements for transparent objects, the tactile modality is insensitive to object transparency and can therefore be used to reconstruct the object's shape. We propose ACTOR, a novel framework for ACtive tactile-based category-level Transparent Object Reconstruction. Because collecting real-world tactile data is prohibitively expensive, ACTOR leverages large synthetic object datasets together with our proposed self-supervised learning approach for object shape reconstruction. During inference, ACTOR reconstructs category-level unknown transparent objects on-the-fly from tactile data. Furthermore, because probing every part of the object surface is sample-inefficient, we propose an active tactile object-exploration strategy with different tactile actions. Moreover, we demonstrate category-level object pose estimation and object recognition with ACTOR while performing the reconstruction task in situ. We extensively evaluate our proposed methodology in real-world robotic experiments, including comprehensive comparison studies with state-of-the-art approaches. Our method outperforms state-of-the-art approaches in tactile-based object reconstruction and object pose estimation.