Multi-Level Bayesian Models for Environment Perception
This book deals with selected problems of machine perception, using various 2D and 3D imaging sensors. It proposes several original methods and also provides a detailed state-of-the-art overview of existing techniques for the automated, multi-level interpretation of observed static or dynamic environments. To ensure a sound theoretical basis for the new models, the surveys and algorithmic developments are carried out within well-established Bayesian frameworks. Low-level scene understanding functions are formulated as various image segmentation problems, where the advantages of probabilistic inference techniques such as Markov Random Fields (MRFs) or Mixed Markov Models are exploited. For object-level scene analysis, the book relies mainly on the literature of Marked Point Process (MPP) approaches, which enforce strong geometric and prior interaction constraints in modeling object populations. In particular, key developments are introduced in the spatial hierarchical decomposition of the observed scenarios and in the temporal extension of complex MRF and MPP models. Apart from conventional optical sensors, case studies are provided on passive radar (ISAR) and Lidar-based Bayesian environment perception tasks. Several experiments show that the proposed contributions, embedded in a rigorous mathematical toolkit, can significantly improve results on real-world 2D/3D test images and videos, with applications in video surveillance, smart city monitoring, autonomous driving, remote sensing, and optical industrial inspection.
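To make the two modeling levels concrete, the following generic energy forms are given only as an illustrative sketch of how such problems are typically posed, not as the book's specific models; all symbols below are placeholder notation introduced here. At the low level, a MAP-MRF segmentation seeks a label field \(\omega\) over the pixel lattice \(S\) that balances the data likelihood against a Potts-type smoothness prior on neighboring site pairs \(\mathcal{N}\):

\[
\hat{\omega} = \arg\min_{\omega} \Big( \sum_{s \in S} -\log P(f_s \mid \omega_s) \;+\; \beta \sum_{(s,r) \in \mathcal{N}} \mathbf{1}\{\omega_s \neq \omega_r\} \Big),
\]

where \(f_s\) is the observed feature at site \(s\) and \(\beta > 0\) weights the prior. At the object level, an MPP configuration \(\mathcal{X}\) of parametric objects \(u\) (e.g. rectangles or ellipses with position, size, and orientation marks) is obtained by minimizing a sum of data-driven unary terms and pairwise interaction priors, such as overlap penalties, over interacting pairs \(u \sim v\):

\[
\hat{\mathcal{X}} = \arg\min_{\mathcal{X}} \Big( \sum_{u \in \mathcal{X}} A_d(u) \;+\; \gamma \sum_{u \sim v} I(u, v) \Big).
\]

The exact data terms, priors, and optimization schemes used in the book differ per application; the expressions above only indicate the shared Bayesian energy-minimization structure.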