MVPCC-Net: multi-view based point cloud completion network for MLS data
In this paper, we introduce a novel multi-view-based method for completing high-resolution 3D point clouds of partial object shapes obtained by mobile laser scanning (MLS) platforms. Our approach estimates both the geometry and the color cues of the missing or incomplete object segments by projecting the 3D input point cloud onto the image planes of multiple virtual cameras and performing 2D inpainting in the image domains of the different views. In contrast to existing state-of-the-art methods, our method can generate point clouds consisting of a variable number of points, depending on the level of detail of the input measurement, a property that greatly facilitates the efficient processing of MLS data with inhomogeneous point density. For training and quantitative evaluation of the proposed method, we provide a new point cloud dataset that consists of both synthetic point clouds of four different street objects with accurate ground truth and real MLS measurements of partially or fully scanned vehicles. Quantitative and qualitative experiments on the provided dataset demonstrate that our method surpasses state-of-the-art approaches in reconstructing fine local geometric structures as well as in estimating the overall shape and color pattern of the objects.
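To illustrate the multi-view projection step described above, the following is a minimal sketch of rendering a colored point cloud into sparse depth and RGB maps through one virtual pinhole camera. The function name, image resolution, and camera parameters are illustrative assumptions, not the paper's actual implementation, which may use a different projection or rasterization scheme.

```python
import numpy as np

def project_to_virtual_camera(points, colors, R, t, K, img_size=(256, 256)):
    """Sketch: project a colored 3D point cloud onto the image plane of one
    virtual pinhole camera, producing sparse depth and RGB maps.

    points : (N, 3) array of 3D point coordinates
    colors : (N, 3) array of RGB values in [0, 1]
    R, t   : camera rotation (3, 3) and translation (3,)
    K      : camera intrinsic matrix (3, 3)
    """
    h, w = img_size
    depth_map = np.full((h, w), np.inf)
    rgb_map = np.zeros((h, w, 3))

    # Transform points into the camera frame and keep those in front of the camera.
    cam_pts = points @ R.T + t
    in_front = cam_pts[:, 2] > 1e-6
    cam_pts, cols = cam_pts[in_front], colors[in_front]

    # Pinhole projection to pixel coordinates.
    uvw = cam_pts @ K.T
    u = (uvw[:, 0] / uvw[:, 2]).astype(int)
    v = (uvw[:, 1] / uvw[:, 2]).astype(int)
    z = cam_pts[:, 2]

    # Discard projections that fall outside the image.
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    u, v, z, cols = u[valid], v[valid], z[valid], cols[valid]

    # Z-buffering: keep the closest point per pixel.
    for ui, vi, zi, ci in zip(u, v, z, cols):
        if zi < depth_map[vi, ui]:
            depth_map[vi, ui] = zi
            rgb_map[vi, ui] = ci

    depth_map[np.isinf(depth_map)] = 0.0  # mark empty pixels as holes
    return depth_map, rgb_map
```

In such a scheme, the empty and occluded pixels of the per-view depth and RGB maps are the targets of the 2D inpainting stage, and the filled pixels can presumably be re-projected to 3D using the same virtual camera parameters, which naturally yields an output point cloud with a variable number of points.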