
Publication

LiREC-Net: A Target-Free and Learning-Based Network for LiDAR, RGB, and Event Calibration

Aditya Ranjan Dash; Ramy Battrawy; René Schuster; Didier Stricker
In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR-2026), June 3-7, 2026, Denver, Colorado, USA. Computer Vision Foundation (CVF) and IEEE, 2026.

Abstract

Advanced autonomous systems rely on multi-sensor fusion for safer and more robust perception. To enable effective fusion, calibrating directly from natural driving scenes (i.e., target-free) with high accuracy is crucial for precise multi-sensor alignment. Existing learning-based calibration methods are typically designed for only a single pair of sensor modalities (i.e., a bi-modal setup). Unlike these methods, we propose LiREC-Net, a target-free, learning-based calibration network that jointly calibrates multiple sensor modality pairs, including LiDAR, RGB, and event data, within a unified framework. To reduce redundant computation and improve efficiency, we introduce a shared LiDAR representation that leverages features from both its 3D nature and its projected depth map, ensuring better consistency across modalities. Trained and evaluated on established datasets such as KITTI and DSEC, LiREC-Net achieves performance competitive with bi-modal models and sets a strong new baseline for the tri-modal use case.
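The core efficiency idea in the abstract, encoding the LiDAR input once and reusing that shared representation for every sensor pair, can be illustrated with a toy sketch. This is not the LiREC-Net architecture; all layer shapes, weights, and the simple flatten-and-project "encoders" below are hypothetical stand-ins, and the 6-DoF output is just a placeholder for a per-pair extrinsic correction (3 rotation + 3 translation parameters).

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, w):
    # Toy "encoder": flatten each sample and apply a linear map + ReLU.
    # A real network would use convolutional / point-based features.
    return np.maximum(x.reshape(x.shape[0], -1) @ w, 0.0)

# Hypothetical inputs: batch of 2, 16x16 spatial resolution.
lidar_depth = rng.standard_normal((2, 1, 16, 16))  # projected LiDAR depth map
rgb = rng.standard_normal((2, 3, 16, 16))          # RGB frame
events = rng.standard_normal((2, 2, 16, 16))       # e.g. event polarity channels

feat_dim = 32
w_lidar = rng.standard_normal((1 * 16 * 16, feat_dim)) * 0.1
w_rgb = rng.standard_normal((3 * 16 * 16, feat_dim)) * 0.1
w_event = rng.standard_normal((2 * 16 * 16, feat_dim)) * 0.1
w_head = rng.standard_normal((2 * feat_dim, 6)) * 0.1  # 6-DoF correction head

# Shared-representation idea: the LiDAR features are computed ONCE
# and fed to both the LiDAR-RGB and LiDAR-event calibration heads,
# avoiding redundant re-encoding per sensor pair.
lidar_feat = encode(lidar_depth, w_lidar)
pose_rgb = np.concatenate([lidar_feat, encode(rgb, w_rgb)], axis=1) @ w_head
pose_event = np.concatenate([lidar_feat, encode(events, w_event)], axis=1) @ w_head
print(pose_rgb.shape, pose_event.shape)  # each (2, 6): one pose correction per sample
```

Adding a third pairing (e.g. RGB-event) would reuse the same per-modality features, which is where the savings over independent bi-modal networks come from.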

Projects