TY - GEN
T1 - Acquiring Abstract Visual Knowledge of the Real-World Environment for Autonomous Vehicles
AU - Ghalyan, Ibrahim F.J.
AU - Kapila, Vikram
N1 - Funding Information:
This work is supported in part by National Science Foundation grants DRK-12 DRL: 1417769, ITEST DRL: 1614085, and RET Site EEC: 1542286, and by NY Space Grant Consortium grant 76156-10488.
PY - 2018/7/2
Y1 - 2018/7/2
AB - This paper considers the problem of modeling the surrounding environment of a driven car using images captured by a dash cam during driving. Inspired by a human driver's interpretation of the car's surroundings, an abstract representation of the environment is developed that can facilitate decision-making to prevent the car's collisions with surrounding objects. The proposed technique uses the dash cam to capture images as the car is driven through a variety of situations and obstacles. Relying on the human driver's interpretation of various driving scenarios, the images of the car's surroundings are manually grouped into classes that reflect the driver's abstract knowledge. Grouping the images allows the knowledge-transfer process from the human driver to an autonomous vehicle to be formulated as a classification problem, producing a meaningful and efficient representation of models arising from real-world scenarios. The framework of convolutional neural networks (CNN) is employed to model the surrounding environment of the driven car, encapsulating the abstract knowledge of the human driver. The efficacy of the proposed modeling approach is evaluated in two experimental scenarios: the first considers a highway driving scenario with three classes, and the second addresses driving in a residential area with six classes. Excellent modeling performance is reported for both experiments, and comparisons with alternative image classification techniques reveal the superiority of the CNN for modeling the considered driving scenarios.
UR - http://www.scopus.com/inward/record.url?scp=85065989602&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85065989602&partnerID=8YFLogxK
U2 - 10.1109/AIPR.2018.8707386
DO - 10.1109/AIPR.2018.8707386
M3 - Conference contribution
AN - SCOPUS:85065989602
T3 - Proceedings - Applied Imagery Pattern Recognition Workshop
BT - 2018 IEEE Applied Imagery Pattern Recognition Workshop, AIPR 2018
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2018 IEEE Applied Imagery Pattern Recognition Workshop, AIPR 2018
Y2 - 9 October 2018 through 11 October 2018
ER -