TY - JOUR
T1 - Reinforcement learning-based optimal control of wearable alarms for consistent roadway workers’ reactions to traffic hazards
AU - Lu, Daniel
AU - Ergan, Semiha
AU - Ozbay, Kaan
N1 - Publisher Copyright:
© 2025 Taylor & Francis Group, LLC and The University of Tennessee.
PY - 2025
Y1 - 2025
N2 - Recent innovations in roadway construction include work zone intrusion alert (WZIA) systems that detect traffic hazards (e.g., speeding or intruding vehicles) and raise alarms (e.g., sounds, lights) using preset attributes (e.g., volume, duration) to warn human workers-on-foot. Designing alarms raised by wearable warning devices (e.g., smartwatch) for roadway workers remains an emerging area of transportation safety research. As roadway work zones begin to adopt these novel warning systems, issues related to alarm attributes may persist. Differences in individuals’ alarm preferences and alarm fatigue from repeated exposure to constant alarm attributes can lead to decreases in worker vigilance towards traffic hazards. Reinforcement learning (RL)-based controls can adjust alarm attributes in real time to counteract these issues, ensuring consistent worker reactions. This study proposes an RL-based approach to train agents that control alarm attributes under different reward functions that prioritise different worker safety reactions (e.g., body movement, head turn). Results show that a reward function with equal weight for each type of reaction produces an alarm agent that ensures consistent safe worker reactions to traffic hazards. Findings also inform the future development of RL-based alarms (i.e., fine-tuning) to counteract the lack of safe worker reactions observed in real-world work zones.
AB - Recent innovations in roadway construction include work zone intrusion alert (WZIA) systems that detect traffic hazards (e.g., speeding or intruding vehicles) and raise alarms (e.g., sounds, lights) using preset attributes (e.g., volume, duration) to warn human workers-on-foot. Designing alarms raised by wearable warning devices (e.g., smartwatch) for roadway workers remains an emerging area of transportation safety research. As roadway work zones begin to adopt these novel warning systems, issues related to alarm attributes may persist. Differences in individuals’ alarm preferences and alarm fatigue from repeated exposure to constant alarm attributes can lead to decreases in worker vigilance towards traffic hazards. Reinforcement learning (RL)-based controls can adjust alarm attributes in real time to counteract these issues, ensuring consistent worker reactions. This study proposes an RL-based approach to train agents that control alarm attributes under different reward functions that prioritise different worker safety reactions (e.g., body movement, head turn). Results show that a reward function with equal weight for each type of reaction produces an alarm agent that ensures consistent safe worker reactions to traffic hazards. Findings also inform the future development of RL-based alarms (i.e., fine-tuning) to counteract the lack of safe worker reactions observed in real-world work zones.
KW - alarm fatigue
KW - reinforcement learning
KW - roadway worker safety
KW - virtual reality
KW - wearable warning device
UR - http://www.scopus.com/inward/record.url?scp=85214693896&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85214693896&partnerID=8YFLogxK
U2 - 10.1080/19439962.2024.2449119
DO - 10.1080/19439962.2024.2449119
M3 - Article
AN - SCOPUS:85214693896
SN - 1943-9962
JO - Journal of Transportation Safety and Security
JF - Journal of Transportation Safety and Security
ER -