
Publication

Leveraging Publicly Available Textual Object Descriptions for Anthropomorphic Robotic Grasp Predictions

Niko Kleer; Martin Feick; Michael Feld
In: Proceedings of the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS-2022), October 23-27, Kyoto, Japan, Pages 7476-7483, IEEE, 2022.

Abstract

Robotic systems using anthropomorphic end-effectors face tremendous challenges in choosing a suitable pose for grasping an object. Because the choice of grasp is influenced by the physical properties of the object, the intended task, and the environment, the problem involves a considerable number of variables. The majority of models aimed at enabling such robots to determine a suitable grasping pose rely on computer vision techniques, sometimes complemented by textual data. This paper investigates the potential of publicly available textual descriptions for predicting a suitable grasping pose for anthropomorphic end-effectors. To this end, we retrieved textual descriptions of 100 everyday objects from Wikipedia, Wiktionary, and WordNet, as well as from a number of well-known dictionaries. We analyze and compare the prediction quality of multiple learning methods and show that a support vector machine-based approach can use this data to achieve a prediction accuracy above 0.75. Finally, we make our collected data available to the research community.
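The abstract does not specify the feature representation or training setup, so the following is only a minimal sketch of the general idea it describes: mapping a textual object description to a grasp-type label with a support vector machine. The descriptions, grasp labels, and TF-IDF features here are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch (not the authors' implementation): classify textual object
# descriptions into grasp types with a linear SVM, assuming descriptions have
# already been collected (e.g. from Wikipedia, Wiktionary, or WordNet) and
# labeled with a grasp type.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical toy data: one description per object, labeled with a grasp type.
descriptions = [
    "A mug is a sturdy cup with a handle, used for drinking hot beverages.",
    "A key is a small flat metal instrument that operates a lock.",
    "A basketball is a large inflated ball used in the sport of basketball.",
    "A needle is a thin pointed tool used for sewing fabric.",
]
grasp_labels = ["cylindrical", "pinch", "spherical", "pinch"]

# TF-IDF features over the descriptions, fed into a linear SVM classifier.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), stop_words="english"),
    LinearSVC(C=1.0),
)
model.fit(descriptions, grasp_labels)

# Predict a grasp type for an unseen object description.
print(model.predict(
    ["Scissors are a hand-operated cutting tool with two sharpened blades."]
))
```

With a realistically sized dataset, such as the 100 everyday objects mentioned above, the quantity compared across learning methods would be held-out prediction accuracy rather than a single prediction.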
