Publication
HandVoxNet++: 3D Hand Shape and Pose Estimation using Voxel-Based Neural Networks
Jameel Malik; Didier Stricker; Sk Aziz Ali; Vladislav Golyanik; Soshi Shimada; Ahmed Elhayek; Christian Theobalt
In: IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), Vol. 1, Pages 1-13, IEEE, 11/2021.
Abstract
3D hand shape and pose estimation from a single depth map is a new and challenging computer vision problem with many
applications. Existing methods addressing it directly regress hand meshes via 2D convolutional neural networks, which leads to
artefacts due to perspective distortions in the images. To address the limitations of the existing methods, we develop HandVoxNet++,
i.e., a voxel-based deep network with 3D and graph convolutions trained in a fully supervised manner. The input to our network is a 3D
voxelized depth map based on the truncated signed distance function (TSDF). HandVoxNet++ relies on two hand shape
representations. The first one is the 3D voxelized grid of hand shape, which does not preserve the mesh topology and which is the
most accurate representation. The second representation is the hand surface that preserves the mesh topology. We combine the
advantages of both representations by aligning the hand surface to the voxelized hand shape, either with a new neural
Graph-Convolutions-based Mesh Registration (GCN-MeshReg) or with the classical segment-wise Non-Rigid Gravitational Approach
(NRGA++), which does not rely on training data. In extensive evaluations on three public benchmarks, i.e., SynHand5M, depth-based
HANDS19 challenge and HO-3D, the proposed HandVoxNet++ achieves state-of-the-art performance. In this journal extension of our
previous approach presented at CVPR 2020, we gain 41.09% and 13.7% higher shape alignment accuracy on the SynHand5M and
HANDS19 datasets, respectively. Our method is ranked first on the HANDS19 challenge dataset (Task 1: Depth-Based 3D Hand Pose
Estimation) at the moment of the submission of our results to the portal in August 2020.
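The abstract states that the network consumes a 3D voxelized depth map based on the truncated signed distance function (TSDF). The following is a minimal, illustrative sketch of such a voxelization step, not the authors' implementation: the camera intrinsics (fx, fy, cx, cy), the grid dimension, voxel size, and truncation distance are all placeholder assumptions.

```python
import numpy as np

def depth_to_tsdf(depth, fx, fy, cx, cy,
                  voxel_size=0.01, grid_dim=32, trunc=0.03):
    """Convert a single depth map (in meters) into a TSDF voxel grid.

    Illustrative sketch only: all parameter values are hypothetical
    placeholders, not the settings used by HandVoxNet++.
    """
    # Back-project valid depth pixels to a 3D point cloud.
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0
    zs = depth[valid]
    xs = (us[valid] - cx) * zs / fx
    ys = (vs[valid] - cy) * zs / fy
    points = np.stack([xs, ys, zs], axis=1)

    # Center the voxel grid on the point cloud (e.g., the hand region).
    center = points.mean(axis=0)
    origin = center - 0.5 * grid_dim * voxel_size

    # Compute the 3D coordinates of all voxel centers.
    idx = np.arange(grid_dim)
    gx, gy, gz = np.meshgrid(idx, idx, idx, indexing="ij")
    centers = origin + (np.stack([gx, gy, gz], axis=-1) + 0.5) * voxel_size

    # Project voxel centers into the depth map.
    u = np.round(centers[..., 0] * fx / centers[..., 2] + cx).astype(int)
    v = np.round(centers[..., 1] * fy / centers[..., 2] + cy).astype(int)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)

    # Default +1 marks empty/unobserved space.
    tsdf = np.ones((grid_dim,) * 3)
    d = np.where(inside, depth[v.clip(0, h - 1), u.clip(0, w - 1)], 0.0)
    observed = inside & (d > 0)

    # Signed distance along the viewing ray, truncated to [-1, 1].
    sdf = d - centers[..., 2]
    tsdf[observed] = np.clip(sdf[observed] / trunc, -1.0, 1.0)
    return tsdf
```

The resulting grid is positive in front of the observed surface, negative behind it, and clipped at the truncation distance, which is what lets a 3D CNN reason about hand geometry without the perspective distortions of a raw 2D depth image.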
Projects
SmartKom - Multimodal Dialogue-Based Human-Technology Interaction,
VIDETE - Generating Prior Knowledge with the Help of Learning Systems for the 4D Analysis of Complex Scenes,
DECODE - Continual Learning for Visual and Multimodal Recognition of Human Movements and the Semantic Environment in Everyday Surroundings