Publication
U-RED: Unsupervised 3D Shape Retrieval and Deformation for Partial Point Clouds
Yan Di; Chenyangguang Zhang; Ruida Zhang; Fabian Manhardt; Yongzhi Su; Jason Raphael Rambach; Didier Stricker; Xiangyang Ji; Federico Tombari
In: IEEE/CVF (Ed.). Proceedings of the International Conference on Computer Vision (ICCV-2023), October 2-6, Paris, France, IEEE/CVF, 2023.
Abstract
In this paper, we propose U-RED, an Unsupervised shape REtrieval and Deformation pipeline that takes an arbitrary object observation as input, typically captured by RGB images or scans, and jointly retrieves and deforms geometrically similar CAD models from a pre-established database to tightly match the target. Since existing methods typically fail to handle noisy partial observations, U-RED is designed to address this issue from two aspects. First, because one partial shape may correspond to multiple potential full shapes, the retrieval method must accommodate this ambiguous one-to-many relationship. U-RED therefore learns to project all possible full shapes of a partial target onto the surface of a unit sphere, so that during inference each sample drawn on the sphere yields a feasible retrieval.
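The one-to-many retrieval above hinges on drawing samples from the surface of a unit sphere. As a minimal sketch only (the abstract does not specify the sampler, and `sample_unit_sphere` is a hypothetical name), a standard way to draw uniform samples is to normalize i.i.d. Gaussian vectors:

```python
import numpy as np

def sample_unit_sphere(n: int, dim: int = 3, seed: int = 0) -> np.ndarray:
    """Draw n points uniformly from the surface of a unit sphere.

    Normalizing i.i.d. Gaussian vectors yields a uniform distribution
    on the sphere. In a retrieval pipeline like the one described,
    each such sample could be decoded into one candidate full shape
    for a given partial target (an illustrative assumption here).
    """
    rng = np.random.default_rng(seed)
    v = rng.normal(size=(n, dim))          # (n, dim) Gaussian vectors
    return v / np.linalg.norm(v, axis=1, keepdims=True)

samples = sample_unit_sphere(5)
print(np.linalg.norm(samples, axis=1))     # every norm is 1 (up to float error)
```

Because the Gaussian distribution is rotationally symmetric, normalization introduces no directional bias, which is why this construction is preferred over, e.g., normalizing uniform cube samples.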
Second, because real-world partial observations usually contain noticeable noise, stable retrieval requires a reliable learned metric of shape similarity. In U-RED, we design a novel point-wise residual-guided metric that enables noise-robust comparison. Extensive experiments on the synthetic datasets PartNet and ComplementMe and the real-world dataset Scan2CAD demonstrate that U-RED surpasses existing state-of-the-art approaches by 47.3%, 16.7% and 31.6%, respectively, under Chamfer Distance.
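The Chamfer Distance used for evaluation measures how well two point clouds cover each other via nearest-neighbor distances in both directions. A minimal NumPy sketch of one common convention (squared distances, averaged both ways; papers differ on squaring and normalization, and this is not the authors' implementation):

```python
import numpy as np

def chamfer_distance(p: np.ndarray, q: np.ndarray) -> float:
    """Symmetric Chamfer Distance between point clouds p (N, 3) and q (M, 3).

    For each point in p, find its squared distance to the nearest point
    in q, and vice versa; average both directions and sum them.
    """
    # Pairwise squared Euclidean distances, shape (N, M).
    d2 = np.sum((p[:, None, :] - q[None, :, :]) ** 2, axis=-1)
    return float(d2.min(axis=1).mean() + d2.min(axis=0).mean())

pts = np.random.rand(128, 3)
print(chamfer_distance(pts, pts))  # → 0.0 (identical clouds)
```

The O(N·M) pairwise matrix is fine for evaluation-sized clouds; for large clouds a KD-tree nearest-neighbor query is the usual替 replacement for the brute-force distance matrix.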