International Journal of Applied Earth Observation and Geoinformation
Point cloud registration and change detection in urban environment using an onboard Lidar sensor and MLS reference data
This paper presents a new method for urban scene analysis, comprising 3D point cloud registration and change detection through the fusion of Lidar point clouds with significantly different density characteristics. The introduced method is able to extract dynamic scene segments (traffic participants or urban renewal regions) and seasonal changes (vegetation regions) from instant 3D (i3D) measurements captured by a Rotating Multi-beam (RMB) Lidar sensor mounted on top of a moving vehicle. As reference data, we rely on a dense point cloud-based environment model provided by Mobile Laser Scanning (MLS) systems. The proposed approach is composed of new solutions for two main subtasks. First, a novel multimodal point cloud registration algorithm is introduced, which can improve the alignment of the sparse i3D measurements to the dense MLS data in situations where conventional point-level registration or keypoint/segment matching strategies fail. Second, an efficient Markov Random Field-based change extraction step is implemented between the registered point clouds, which exploits the fact that, owing to the mapping geometry of the given sensor configuration, the core of the problem can be solved efficiently in the 2D range image domain without information loss. Experimental evaluation is conducted on a new benchmark set that contains three different heavy-traffic road sections in city center areas, covering in total nearly 1 km of road. The test data consists of relevant industrial measurements provided by a state-of-the-art RMB scanner (with a point density of around 50–500 points/m²) and an up-to-date MLS system (more than 5000 points/m²). The clear advantages of the new method are quantitatively demonstrated against various reference techniques. In comparison to six different point cloud registration methods, the proposed approach decreases the median of point-level distances by 1–2 orders of magnitude.
Regarding change detection, the new method outperforms the reference models either in F1-score (by around 10–25%) or in runtime (being 10–1000 times faster).
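The abstract notes that, given the RMB sensor's scanning geometry, the change detection problem can be handled losslessly in the 2D range image domain. The abstract does not give implementation details; as a purely illustrative sketch (the function name, angular resolution, vertical field of view, and ring count are assumptions, not taken from the paper), a spherical projection of an RMB Lidar scan into a range image might look like:

```python
import numpy as np

def to_range_image(points, h_res_deg=0.2, v_fov_deg=(-30.0, 10.0), n_rings=64):
    """Project 3D Lidar points of shape (N, 3) into a 2D range image.

    Columns index azimuth, rows index elevation (beam ring);
    each pixel stores the range of the closest return that maps to it.
    All resolution/FOV parameters are illustrative assumptions.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.sqrt(x**2 + y**2 + z**2)
    azimuth = np.degrees(np.arctan2(y, x))                 # in [-180, 180)
    elevation = np.degrees(np.arcsin(z / np.maximum(r, 1e-9)))

    n_cols = int(360.0 / h_res_deg)
    col = ((azimuth + 180.0) / h_res_deg).astype(int) % n_cols
    v_min, v_max = v_fov_deg
    row = ((elevation - v_min) / (v_max - v_min) * (n_rings - 1)).round().astype(int)
    valid = (row >= 0) & (row < n_rings)

    img = np.full((n_rings, n_cols), np.inf)
    # keep the nearest return per pixel (inf marks empty pixels)
    np.minimum.at(img, (row[valid], col[valid]), r[valid])
    return img
```

With two registered scans projected this way, per-pixel range differences can be compared directly in 2D, which is what makes range-image formulations attractive for fast change extraction.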