Journals

SalsaNext+: A Multimodal-Based Point Cloud Semantic Segmentation With Range and RGB Images

Year: 2025
Type of Publication: Article
Journal: IEEE Access
ISSN: 2169-3536
DOI: https://doi.org/10.1109/ACCESS.2025.3559580
Abstract:
Advances in sensor fusion techniques are redefining the landscape of 3D point cloud semantic segmentation, particularly for autonomous driving applications. We propose an enhanced approach that leverages the complementary strengths of LiDAR and multi-camera systems. This study introduces two extensions to the state-of-the-art, LiDAR-only SalsaNext model: SalsaNext+RGB, which integrates RGB data into range-view (RV) images, and SalsaNext+PANO, which incorporates panoramic images built from multi-camera setups. The proposed methods are evaluated on the SemanticKITTI and Panoptic nuScenes datasets, showing notable improvements in segmentation accuracy. Results indicate that RGB fusion boosts performance with minimal latency, while panoramic integration offers additional gains at the expense of a higher computational load. Comparative analyses highlight significant mIoU gains, demonstrating the potential of multimodal sensor fusion for complex driving scene understanding.
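To make the RGB-into-range-view fusion described above concrete, below is a minimal sketch of the general idea: LiDAR points are spherically projected onto a range-view image (as in SemanticKITTI-style pipelines), camera RGB values are sampled for each point via the LiDAR-to-camera calibration, and the colour channels are appended to the range-view tensor. Function names, the channel layout, the projection parameters, and the calibration matrices `T_cam_lidar` and `K` are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def lidar_to_range_image(points, H=64, W=2048, fov_up_deg=3.0, fov_down_deg=-25.0):
    """Project LiDAR points (N, 4: x, y, z, remission) onto a spherical
    range-view image of shape (H, W, 5): range, x, y, z, remission."""
    fov_up, fov_down = np.deg2rad(fov_up_deg), np.deg2rad(fov_down_deg)
    fov = fov_up - fov_down

    x, y, z, remission = points[:, 0], points[:, 1], points[:, 2], points[:, 3]
    r = np.linalg.norm(points[:, :3], axis=1) + 1e-8
    yaw = np.arctan2(y, x)        # azimuth in [-pi, pi]
    pitch = np.arcsin(z / r)      # elevation

    # Map angles to pixel coordinates of the range-view image.
    u = np.clip(np.floor(0.5 * (1.0 - yaw / np.pi) * W), 0, W - 1).astype(np.int32)
    v = np.clip(np.floor((1.0 - (pitch - fov_down) / fov) * H), 0, H - 1).astype(np.int32)

    # Fill far-to-near so the closest point per pixel wins.
    order = np.argsort(r)[::-1]
    rv = np.zeros((H, W, 5), dtype=np.float32)
    rv[v[order], u[order]] = np.stack([r, x, y, z, remission], axis=1)[order]
    return rv, u, v


def fuse_rgb_into_range_image(rv, u, v, points, rgb_image, T_cam_lidar, K):
    """Append per-point RGB (sampled from a camera image) as three extra
    channels, yielding an (H, W, 8) multimodal input tensor."""
    n = points.shape[0]
    pts_h = np.hstack([points[:, :3], np.ones((n, 1))])   # homogeneous coords
    pts_cam = (T_cam_lidar @ pts_h.T).T                    # LiDAR -> camera frame
    in_front = pts_cam[:, 2] > 0.1

    proj = (K @ pts_cam[:, :3].T).T                        # pinhole projection
    px = (proj[:, 0] / proj[:, 2]).astype(np.int32)
    py = (proj[:, 1] / proj[:, 2]).astype(np.int32)

    h_img, w_img = rgb_image.shape[:2]
    valid = in_front & (px >= 0) & (px < w_img) & (py >= 0) & (py < h_img)

    rgb = np.zeros((rv.shape[0], rv.shape[1], 3), dtype=np.float32)
    rgb[v[valid], u[valid]] = rgb_image[py[valid], px[valid]] / 255.0
    return np.concatenate([rv, rgb], axis=-1)
```

Under these assumptions, the fused (H, W, 8) tensor would simply replace the usual 5-channel range-view input of a SalsaNext-style encoder; the panoramic variant would instead sample colours from a stitched multi-camera panorama before the same projection step.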

