TY - JOUR
T1 - V2X-Sim: Multi-Agent Collaborative Perception Dataset and Benchmark for Autonomous Driving
AU - Li, Yiming
AU - Ma, Dekun
AU - An, Ziyan
AU - Wang, Zixun
AU - Zhong, Yiqi
AU - Chen, Siheng
AU - Feng, Chen
N1 - Funding Information:
This work was supported in part by the NSF CPS Program under Grants CMMI-1932187 and CNS-2121391, in part by the National Natural Science Foundation of China under Grant 62171276, in part by the Science and Technology Commission of Shanghai Municipality under Grant 21511100900, and in part by CALT under Grant 2021-01.
Publisher Copyright:
© 2022 IEEE.
PY - 2022/10/1
Y1 - 2022/10/1
AB - Vehicle-to-everything (V2X) communication techniques enable collaboration between vehicles and other entities in the surrounding environment, which could fundamentally improve the perception system for autonomous driving. However, the lack of a public dataset significantly restricts research progress in collaborative perception. To fill this gap, we present V2X-Sim, a comprehensive simulated multi-agent perception dataset for V2X-aided autonomous driving. V2X-Sim provides: (1) multi-agent sensor recordings from the road-side unit (RSU) and multiple vehicles that enable collaborative perception, (2) multi-modality sensor streams that facilitate multi-modality perception, and (3) diverse ground truths that support various perception tasks. In addition, we build an open-source testbed and provide a benchmark of state-of-the-art collaborative perception algorithms on three tasks: detection, tracking, and segmentation. V2X-Sim seeks to stimulate collaborative perception research for autonomous driving before realistic datasets become widely available.
KW - Deep learning for visual perception
KW - Data sets for robotic vision
KW - Multi-robot systems
UR - http://www.scopus.com/inward/record.url?scp=85135243056&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85135243056&partnerID=8YFLogxK
DO - 10.1109/LRA.2022.3192802
M3 - Article
AN - SCOPUS:85135243056
SN - 2377-3766
VL - 7
SP - 10914
EP - 10921
JO - IEEE Robotics and Automation Letters
JF - IEEE Robotics and Automation Letters
IS - 4
ER -