Single Frame Semantic Segmentation Using Multi-Modal Spherical Images

Guttikonda Suresh; Jason Raphael Rambach
In: Proceedings of the IEEE Winter Conference on Applications of Computer Vision (WACV-2024), IEEE Xplore, 2024.


In recent years, the research community has shown considerable interest in panoramic images, which offer a 360-degree directional perspective. To fully realize their potential, multiple data modalities can be fed to a model and their complementary characteristics exploited for more robust and rich scene interpretation based on semantic segmentation. Existing research, however, has mostly concentrated on pinhole RGB-X semantic segmentation. In this study, we propose a transformer-based cross-modal fusion architecture to bridge the gap between multi-modal fusion and omnidirectional scene perception. We employ distortion-aware modules to address the extreme object deformations and panorama distortions that result from the equirectangular representation. Additionally, before merging the features, we conduct cross-modal interactions for feature rectification and information exchange, communicating long-range contexts across bi-modal and tri-modal feature streams. In thorough experiments using combinations of four different modality types on three indoor panoramic-view datasets, our technique achieves state-of-the-art mIoU performance: 60.60% on Stanford2D3DS (RGB-HHA), 71.97% on Structured3D (RGB-D-N), and 35.92% on Matterport3D (RGB-D).
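To make the fusion pipeline in the abstract concrete, the following is a minimal NumPy sketch of the general pattern it describes: each modality's features are first rectified using statistics from the other modality, then exchanged via cross-attention and merged. The function names, the sigmoid channel gating, and the single-head attention here are illustrative assumptions, not the paper's actual modules or hyperparameters.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_rectify(feat_a, feat_b):
    """Feature rectification (hypothetical simplification): each modality's
    (H, W, C) features are re-weighted channel-wise by sigmoid gates derived
    from the OTHER modality's global channel statistics."""
    gate_a = 1.0 / (1.0 + np.exp(-feat_b.mean(axis=(0, 1))))  # gates for A from B
    gate_b = 1.0 / (1.0 + np.exp(-feat_a.mean(axis=(0, 1))))  # gates for B from A
    return feat_a * gate_a, feat_b * gate_b

def cross_attention_fuse(feat_a, feat_b):
    """Information exchange and merging (illustrative): modality A queries
    modality B with single-head attention over all spatial positions,
    propagating long-range context, then the result is averaged with A."""
    h, w, c = feat_a.shape
    q = feat_a.reshape(-1, c)                      # queries from A
    kv = feat_b.reshape(-1, c)                     # keys/values from B
    attn = softmax(q @ kv.T / np.sqrt(c), axis=-1) # (H*W, H*W) attention map
    fused = 0.5 * (q + attn @ kv)                  # exchange, then merge
    return fused.reshape(h, w, c)

# Toy bi-modal feature maps standing in for encoder outputs.
rng = np.random.default_rng(0)
rgb_feat = rng.standard_normal((8, 16, 32))    # e.g. RGB branch features
depth_feat = rng.standard_normal((8, 16, 32))  # e.g. depth/HHA branch features

ra, rb = cross_modal_rectify(rgb_feat, depth_feat)
fused = cross_attention_fuse(ra, rb)
print(fused.shape)  # (8, 16, 32)
```

A tri-modal stream would repeat the same rectify-then-fuse pattern pairwise across three branches; the distortion-aware handling of equirectangular geometry is a separate concern not modeled in this sketch.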