Features in the Point Cloud: An Automatic Approach to High-Density LiDAR-to-Camera Calibration

Abstract

In this paper, we introduce a novel method to estimate the relative pose calibration parameters between the coordinate frames of a high-density LiDAR sensor and an optical camera. The main contribution of this paper is the rendering of synthetic LiDAR images from the point cloud's reflectivity information, which enables 2D feature detectors to match calibration tag corners between the camera image and a dense LiDAR point cloud. We evaluate the method on synthetic tests as well as on real data collected with a Livox Mid-40 LiDAR and a MYNT EYE D camera. The synthetic tests quantitatively demonstrate low transformation error across a range of noise levels. Moreover, our algorithm does not require measuring the physical size of the calibration targets, which avoids measurement error in real-world applications. On real data, the alignment between the two modalities yields qualitatively well-aligned visual results.
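The core idea, rendering the point cloud as a 2D reflectivity image so that off-the-shelf 2D detectors can locate target corners, and then solving for the relative pose, can be sketched as follows. This is a minimal illustration assuming NumPy and OpenCV and a checkerboard-style target; render_intensity_image, backproject_corners, K_virtual, and the pattern size are hypothetical names and values, not the paper's actual implementation.

```python
import numpy as np
import cv2


def render_intensity_image(points, reflectivity, K, img_size):
    # Render the cloud as a synthetic 2D image: project each 3D point
    # through a virtual pinhole camera K and write its reflectivity
    # as the pixel intensity.
    w, h = img_size
    img = np.zeros((h, w), dtype=np.float32)
    front = points[:, 2] > 0                      # points in front of the virtual camera
    pts, refl = points[front], reflectivity[front]
    uv = (K @ (pts.T / pts[:, 2])).T              # rows are (u, v, 1)
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    img[v[ok], u[ok]] = refl[ok]
    return cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)


def backproject_corners(corners_uv, points, K):
    # For each 2D corner found in the synthetic image, recover the 3D LiDAR
    # point whose projection lands closest to it (nearest-pixel lookup).
    front = points[:, 2] > 0
    pts = points[front]
    uv = (K @ (pts.T / pts[:, 2])).T[:, :2]
    corners = corners_uv.reshape(-1, 2)
    idx = [np.argmin(np.sum((uv - c) ** 2, axis=1)) for c in corners]
    return pts[idx]


# Usage sketch; lidar_points (N,3), lidar_refl (N,), cam_img, and the camera
# intrinsics K_cam are assumed to come from the sensors.
K_virtual = np.array([[500.0,   0.0, 320.0],
                      [  0.0, 500.0, 240.0],
                      [  0.0,   0.0,   1.0]])
lidar_img = render_intensity_image(lidar_points, lidar_refl, K_virtual, (640, 480))

pattern = (7, 5)  # inner-corner count of the assumed checkerboard target
found_l, corners_l = cv2.findChessboardCorners(lidar_img, pattern)
found_c, corners_c = cv2.findChessboardCorners(cam_img, pattern)

if found_l and found_c:
    obj_pts = backproject_corners(corners_l, lidar_points, K_virtual)
    # PnP recovers the LiDAR-to-camera rotation and translation; no physical
    # target dimensions are needed because the 3D corners come from the cloud.
    _, rvec, tvec = cv2.solvePnP(obj_pts.astype(np.float32), corners_c, K_cam, None)
    R, _ = cv2.Rodrigues(rvec)
```

The nearest-pixel lookup in backproject_corners is a crude stand-in for corner-to-point association; the paper's own detector and matching scheme may differ. The final PnP step also illustrates why no physical target measurement is required: the 3D corner coordinates come directly from the point cloud rather than from a known target size.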

Published as: Research report for the M.S. Robotics degree, University of Michigan
