Publication
Incorporation of the Intended Task into a Vision-based Grasp Type Predictor for Multi-fingered Robotic Grasping
Niko Kleer; Ole Keil; Martin Feick; Amr Gomaa; Tim Schwartz; Michael Feld
In: 2024 33rd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN 2024), August 26-30, Pasadena, California, USA, Pages 1301-1307, IEEE, 2024.
Abstract
Robots equipped with multi-fingered or fully anthropomorphic end-effectors can engage in highly complex manipulation tasks. However, the choice of a suitable grasp for manipulating an object is strongly influenced by factors such as the object's physical properties and the intended task. This makes predicting an appropriate grasping pose for carrying out a specific task notably challenging. At the same time, current grasp type predictors rarely consider the task as part of the prediction process. This work proposes a learning model that considers the task in addition to an object's visual features when predicting a suitable grasp type. Furthermore, we generate a synthetic dataset by simulating robotic grasps on 3D object models with the BarrettHand end-effector. With an angular similarity of 0.9 and above, our model achieves prediction results competitive with grasp type predictors that do not take the intended task into account. Finally, to foster research in the field, we make our synthesized dataset available to the research community.
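The abstract does not spell out how angular similarity is computed. A common formulation maps the angle between two vectors into [0, 1] via 1 - arccos(cos θ)/π, where 1.0 means identical direction. The sketch below shows how a predicted grasp might be scored against a ground-truth grasp under two assumptions not stated in the abstract: that this standard formulation is the metric used, and that grasps are represented as BarrettHand joint-angle vectors. The function name `angular_similarity` and the example vectors are hypothetical.

```python
import numpy as np

def angular_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Angular similarity between two vectors, mapped to [0, 1].

    1.0 means identical direction, 0.0 means opposite direction.
    Assumed formulation: 1 - arccos(cosine similarity) / pi.
    """
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    cos = np.clip(cos, -1.0, 1.0)  # guard against floating-point drift
    return 1.0 - np.arccos(cos) / np.pi

# Hypothetical BarrettHand joint-angle vectors (spread + three finger
# flexion joints), in radians; values are illustrative only.
predicted = np.array([0.10, 1.20, 1.15, 1.18])
ground_truth = np.array([0.12, 1.25, 1.10, 1.20])
print(f"angular similarity: {angular_similarity(predicted, ground_truth):.3f}")
```

Under this formulation, the reported score of 0.9 corresponds to an angle of about 18 degrees between the predicted and ground-truth vectors.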
Projects
- CAMELOT - Continuous Adaptive Machine Learning for Control Takeover Situations
- FedWell - Life-Long Federated User and Mental Modeling for Health and Well-being