A Robust and Runtime-Efficient Track-to-Track Fusion for Automotive Perception Systems
Advanced driver assistance and autonomous driving systems require an enhanced perception system that fuses data from multiple sensors. Many automotive sensors provide high-level data, such as tracked objects, i.e., tracks, which are usually fused in a track-to-track manner. The core of this fusion is the track-to-track association (T2TA), which aims to establish assignments between the tracks. In conventional T2TA, the assignment likelihood function is derived from a “diffuse prior,” neglecting that the sensors may provide duplicated tracks of a target. Moreover, conventional association algorithms are usually computationally demanding due to the combinatorial nature of the problem. The first motivation of this work is to obtain a computationally efficient and robust solution that addresses these problems. Another crucial element of track-to-track fusion is track management, which maintains the list of tracks by initializing and deleting tracks and thus has a great impact on the reliability of the fusion output. In this article, we propose a novel track-to-track fusion architecture in which the fused tracks are fed back to the association. The proposed method comprises two main contributions. First, a computationally efficient association algorithm is provided in which the “diffuse prior” is replaced with an informative prior, exploiting the feedback loop of the fused tracks; it also handles duplicated tracks. Second, a track management system (TMS) relying on a revamped track existence probability fusion is proposed, contributing to efficient false track filtering and continuous object tracking. The proposed methodology is evaluated on real-world data from a frontal perception system. The results show that the proposed association outperforms conventional methods while maintaining favorable computational complexity, contributing to real-time applicability. The TMS relying on the revamped existence probability fusion can efficiently filter false tracks and continuously track objects. Moreover, the resulting overall track-to-track fusion outperforms state-of-the-art multiobject tracking-based fusion algorithms.