Abstract
Emergency vehicles (EMVs) play a crucial role in responding to time-critical calls such as medical emergencies and fire outbreaks in urban areas. Existing methods for EMV dispatch typically optimize routes based on historical traffic-flow data and design traffic signal pre-emption accordingly; however, we still lack a systematic methodology to address the coupling between EMV routing and traffic signal control. In this paper, we propose EMVLight, a decentralized reinforcement learning (RL) framework for joint dynamic EMV routing and traffic signal pre-emption. We adopt the multi-agent advantage actor–critic method with policy sharing and a spatial discount factor. This framework addresses the coupling between EMV navigation and traffic signal control via an innovative design of multi-class RL agents and a novel pressure-based reward function. The proposed methodology enables EMVLight to learn network-level cooperative traffic signal phasing strategies that not only reduce EMV travel time but also shorten the travel time of non-EMVs. Simulation-based experiments indicate that EMVLight enables up to a 42.6% reduction in EMV travel time as well as a 23.5% shorter average travel time compared with existing approaches.
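To make the reward design more concrete, below is a minimal Python sketch of a pressure-based reward combined with a spatial discount factor. It assumes pressure is the difference between upstream and downstream queue lengths and that other intersections' pressures are down-weighted geometrically by hop distance from the ego agent; the function names, the discount value, and the exact discounting scheme are illustrative assumptions, not the paper's implementation.

```python
from typing import Dict, List


def pressure(incoming_queues: List[int], outgoing_queues: List[int]) -> int:
    """Max-pressure-style metric: queued vehicles upstream minus downstream."""
    return sum(incoming_queues) - sum(outgoing_queues)


def spatially_discounted_reward(
    pressures: Dict[str, float],      # pressure value per intersection id
    hops_from_agent: Dict[str, int],  # hop distance of each intersection from the ego agent
    spatial_gamma: float = 0.8,       # assumed spatial discount factor in (0, 1]
) -> float:
    """Ego agent's reward: negative own pressure plus other intersections'
    negative pressures, each down-weighted geometrically by hop distance."""
    return sum(
        -(spatial_gamma ** hops_from_agent[nid]) * p
        for nid, p in pressures.items()
    )


# Example: three intersections; the ego agent is 'A' (hop distance 0).
p = {
    "A": pressure([4, 3], [1, 0]),
    "B": pressure([2, 2], [3, 1]),
    "C": pressure([5, 0], [2, 2]),
}
hops = {"A": 0, "B": 1, "C": 2}
print(spatially_discounted_reward(p, hops))  # higher (less negative) is better
```

In this kind of design, lowering pressure corresponds to discharging queues, and the spatial discount keeps each agent's reward mostly local while still encouraging network-level cooperation.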
Original language | English (US) |
---|---|
Article number | 103955 |
Journal | Transportation Research Part C: Emerging Technologies |
Volume | 146 |
DOIs | |
State | Published - Jan 2023 |
Keywords
- Deep reinforcement learning
- Emergency vehicle management
- Multi-agent system
- Traffic signal control
ASJC Scopus subject areas
- Civil and Structural Engineering
- Automotive Engineering
- Transportation
- Management Science and Operations Research