Paper Title
Multi-view Sensor Fusion by Integrating Model-based Estimation and Graph Learning for Collaborative Object Localization
Paper Authors
Paper Abstract
Collaborative object localization aims to collaboratively estimate the locations of objects observed from multiple views or perspectives, which is a critical ability for multi-agent systems such as connected vehicles. To enable collaborative localization, several model-based state estimation and learning-based localization methods have been developed. Despite their encouraging performance, model-based state estimation often lacks the ability to model the complex relationships among multiple objects, while learning-based methods are typically unable to fuse observations from an arbitrary number of views and cannot model uncertainty well. In this paper, we introduce a novel spatiotemporal graph filter approach that integrates graph learning and model-based estimation to perform multi-view sensor fusion for collaborative object localization. Our approach models complex object relationships using a new spatiotemporal graph representation and fuses multi-view observations in a Bayesian fashion to improve location estimation under uncertainty. We evaluate our approach in the applications of connected autonomous driving and multi-pedestrian localization. Experimental results show that our approach outperforms previous techniques and achieves state-of-the-art performance on collaborative localization.
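The abstract describes fusing multi-view observations "in a Bayesian fashion" to improve location estimates under uncertainty. The snippet below is a minimal illustrative sketch of one standard form of such fusion, precision-weighted combination of independent Gaussian observations of the same object, not the paper's actual spatiotemporal graph filter; the function name and two-view example are hypothetical.

```python
import numpy as np

def fuse_gaussian_views(means, covariances):
    """Bayesian fusion of independent Gaussian observations of the same
    object location (product of Gaussians): sum the precision matrices,
    then precision-weight the means. Handles any number of views."""
    precisions = [np.linalg.inv(c) for c in covariances]
    fused_cov = np.linalg.inv(sum(precisions))
    fused_mean = fused_cov @ sum(p @ m for p, m in zip(precisions, means))
    return fused_mean, fused_cov

# Hypothetical example: two views of one 2-D object position,
# where view 1 is twice as certain (half the covariance) as view 2.
m1, c1 = np.array([1.0, 2.0]), np.eye(2) * 0.5
m2, c2 = np.array([1.2, 1.8]), np.eye(2) * 1.0
mean, cov = fuse_gaussian_views([m1, m2], [c1, c2])
# The fused estimate lies between the views, weighted toward view 1,
# and its covariance is smaller than either input covariance.
```

A useful property of this fusion rule is that adding a view can never increase the fused uncertainty, which is one reason observations from an arbitrary number of views can be combined incrementally.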