MinkLoc++: Lidar and Monocular Image Fusion for Place Recognition

MinkLoc++ computes a global descriptor from a pair of sensor readings: a 3D point cloud from a LiDAR and an image from an RGB camera. Localization is performed by searching the database for a geo-tagged pair of readings with the closest descriptor.
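
As a rough illustration of the retrieval step, localization reduces to a nearest-neighbour search over stored global descriptors. The sketch below is a minimal NumPy version under assumed names (`localize`, `db_descs`, `db_poses`) and an assumed Euclidean metric; it is not the paper's exact evaluation protocol:

```python
import numpy as np

def localize(query_desc, db_descs, db_poses):
    """Return the geo-tag of the database entry whose global descriptor
    is nearest (Euclidean distance) to the query descriptor."""
    dists = np.linalg.norm(db_descs - query_desc, axis=1)
    best = int(np.argmin(dists))
    return db_poses[best], float(dists[best])

# Toy usage: 1000 stored 512-D descriptors with 2-D geo-tags.
db_descs = np.random.randn(1000, 512).astype(np.float32)
db_poses = np.random.randn(1000, 2)
query = np.random.randn(512).astype(np.float32)
pose, dist = localize(query, db_descs, db_poses)
```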

Abstract

We introduce a discriminative multimodal descriptor based on a pair of sensor readings: a point cloud from a LiDAR and an image from an RGB camera. Our descriptor, named MinkLoc++, can be used for place recognition, re-localization and loop closure in robotics and autonomous vehicle applications. We use a late fusion approach, in which each modality is processed separately and the results are fused in the final part of the processing pipeline. The proposed method achieves state-of-the-art performance on standard place recognition benchmarks. We also identify the dominating modality problem that arises when training a multimodal descriptor: the network focuses on the modality with a larger overfit to the training data, which drives the loss down during training but leads to suboptimal performance on the evaluation set. In this work, we describe how to detect and mitigate this risk when using a deep metric learning approach to train a multimodal neural network.
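
The late fusion design can be summarized with a short PyTorch sketch. The class and branch names here are hypothetical, concatenation is an assumed fusion operator, and the toy `nn.Linear` branches merely stand in for the real networks (in the paper, a sparse 3D convolutional branch and an image CNN branch); this is an illustration of the late-fusion pattern, not the authors' implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LateFusionDescriptor(nn.Module):
    """Sketch of a late-fusion multimodal descriptor: each modality is
    embedded by its own branch, and the two unimodal descriptors are
    combined only at the very end of the pipeline."""

    def __init__(self, cloud_branch: nn.Module, image_branch: nn.Module):
        super().__init__()
        self.cloud_branch = cloud_branch  # e.g. a sparse 3D conv network
        self.image_branch = image_branch  # e.g. a 2D CNN backbone

    def forward(self, cloud, image):
        d_cloud = self.cloud_branch(cloud)            # (B, D1) cloud descriptor
        d_image = self.image_branch(image)            # (B, D2) image descriptor
        fused = torch.cat([d_cloud, d_image], dim=1)  # late fusion by concatenation
        return F.normalize(fused, dim=1)              # unit-length global descriptor

# Toy usage with stand-in linear branches:
model = LateFusionDescriptor(nn.Linear(96, 128), nn.Linear(64, 128))
desc = model(torch.randn(4, 96), torch.randn(4, 64))  # -> (4, 256) descriptors
```

One practical benefit of this structure is that each branch produces its own descriptor before fusion, so per-modality behaviour can be monitored separately during training; this is what makes a dominating modality detectable in the first place.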

Publication
In Proceedings of the 2021 International Joint Conference on Neural Networks (IJCNN)
Jacek Komorowski
Assistant Professor
Tomasz Trzciński
Principal Investigator
