TY - GEN
T1 - Modeling and reproducing human daily travel behavior from GPS data
T2 - 1st ACM SIGSPATIAL Workshop on Prediction of Human Mobility, PredictGIS 2017
AU - Pang, Yanbo
AU - Yabe, Takahiro
AU - Tsubouchi, Kota
AU - Sekimoto, Yoshihide
N1 - Publisher Copyright:
© 2017 Association for Computing Machinery.
PY - 2017/11/7
Y1 - 2017/11/7
N2 - Understanding the daily movement of humans in space and time at different granularity levels is of critical value for urban planning, transport management, health care, and commercial services. However, data on populations' daily travel behavior have traditionally been collected through travel surveys, which are infrequent, expensive, and unable to reflect changes in transportation. These significant delays pose a challenge to the demand for capturing, modeling, and reproducing human travel behavior under different scenarios. In this study, we propose an inverse reinforcement learning-based formulation for training an agent model that enables modeling of complex decision-making while accounting for a dynamic environment at the urban granularity level. The modeling framework is based on the Markov decision process to represent an individual's decision-making. To obtain the travel behavior characteristics of real humans, we apply the proposed approach to a real-time GPS dataset collected via a smartphone application with more than 2 million daily users, modeling people's travel behavior for different daily scenarios (i.e., weekdays, weekends, and national holidays) in the Tokyo metropolitan area. We find that the developed model can generate individuals' daily travel plans. In addition, by aggregating the agents' travel behavior at the city-wide scale, urban daily travel demand can be obtained and used to estimate the hourly population distribution. The result of this work can also be regarded as a synthetic mobility dataset, avoiding many of the privacy concerns surrounding real GPS data.
AB - Understanding the daily movement of humans in space and time at different granularity levels is of critical value for urban planning, transport management, health care, and commercial services. However, data on populations' daily travel behavior have traditionally been collected through travel surveys, which are infrequent, expensive, and unable to reflect changes in transportation. These significant delays pose a challenge to the demand for capturing, modeling, and reproducing human travel behavior under different scenarios. In this study, we propose an inverse reinforcement learning-based formulation for training an agent model that enables modeling of complex decision-making while accounting for a dynamic environment at the urban granularity level. The modeling framework is based on the Markov decision process to represent an individual's decision-making. To obtain the travel behavior characteristics of real humans, we apply the proposed approach to a real-time GPS dataset collected via a smartphone application with more than 2 million daily users, modeling people's travel behavior for different daily scenarios (i.e., weekdays, weekends, and national holidays) in the Tokyo metropolitan area. We find that the developed model can generate individuals' daily travel plans. In addition, by aggregating the agents' travel behavior at the city-wide scale, urban daily travel demand can be obtained and used to estimate the hourly population distribution. The result of this work can also be regarded as a synthetic mobility dataset, avoiding many of the privacy concerns surrounding real GPS data.
KW - Daily travel behavior
KW - Human mobility
KW - Urban dynamics
UR - http://www.scopus.com/inward/record.url?scp=85050971626&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85050971626&partnerID=8YFLogxK
U2 - 10.1145/3152341.3152347
DO - 10.1145/3152341.3152347
M3 - Conference contribution
AN - SCOPUS:85050971626
T3 - Proceedings of the 1st ACM SIGSPATIAL Workshop on Prediction of Human Mobility, PredictGIS 2017
BT - Proceedings of the 1st ACM SIGSPATIAL Workshop on Prediction of Human Mobility, PredictGIS 2017
PB - Association for Computing Machinery, Inc
Y2 - 7 November 2017 through 10 November 2017
ER -